Securing the World of Agentic AI: A Shift in the Threat Landscape

This blog summarizes Phil Tee’s presentation at The AI Networking Summit. For the full recording of this session, visit here.

The rise of Agentic AI, in which AI-enabled agents build and operate enterprise applications, has sparked a technological “gold rush” and fundamentally shifted the security paradigm. As AI moves beyond simple GenAI models to complex, autonomous agents, traditional security measures are proving inadequate, forcing a necessary re-evaluation of how we protect our digital environments.

The Agentic Difference: More Than Just GenAI

What differentiates an AI agent from a standard Large Language Model (LLM) like ChatGPT? It’s the addition of autonomy, planning, and tool-use. As one panelist at the AI Networking Summit in NY noted, agents possess a planning loop and often a critique loop, allowing them to iterate and execute complex, multi-step tasks. Crucially, in a multi-agent system, one agent can coordinate the efforts of others, essentially acting as the glue in a complex computational process.
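
To make that loop concrete, here is a minimal sketch of the plan, act, critique cycle in Python. Everything in it is a placeholder rather than any real framework’s API: call_llm is a stub standing in for an actual model call, and the tool registry is illustrative.

```python
# Minimal sketch of an agent's plan/act/critique loop. All names here
# (call_llm, the tool registry) are hypothetical placeholders.

def call_llm(prompt: str) -> str:
    """Stand-in for a real model call; replace with an actual LLM client."""
    return "finish: done"  # placeholder reply so the sketch runs as-is

def run_agent(goal: str, tools: dict, max_steps: int = 5) -> str:
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        # Planning step: the model proposes the next action from the history.
        action = call_llm("\n".join(history) + "\nWhat next?")
        if action.startswith("finish:"):
            return action.removeprefix("finish:").strip()
        tool_name, _, tool_input = action.partition(":")
        # Tool use: the agent acts on the outside world, not just on text.
        observation = tools[tool_name](tool_input.strip())
        history.append(f"Observation: {observation}")
        # Critique step: a second pass reviews the result before the loop
        # continues; this is what lets agents iterate on multi-step tasks.
        history.append("Critique: " + call_llm(f"Review: {observation}"))
    return "step budget exhausted"

print(run_agent("file the expense report", tools={}))  # -> "done"
```

In a multi-agent system, the “glue” agent described above is simply another instance of this loop whose tools are the other agents.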

This newfound agency introduces significant security challenges. Agents are inherently polymorphic, meaning their behavior can deviate from the intended path, leading to unintended consequences and unpredictable execution loops. And because an LLM’s behavior is largely defined by its prompt, malicious prompt injection attacks, such as sending invisible text to force an agent to exfiltrate data, become highly potent threat vectors.
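
One narrow defense against the invisible-text variant is to strip non-printing Unicode from untrusted input before it ever reaches the agent. The sketch below shows that single check; it is illustrative only and is nowhere near a complete prompt-injection defense.

```python
import unicodedata

# Illustrative guardrail: strip "invisible" Unicode before untrusted text
# reaches an agent. Unicode tag characters (U+E0000..U+E007F) and
# zero-width/format characters can hide instructions that render as
# nothing on screen. This is one check, not a complete defense.

def strip_invisible(text: str) -> str:
    out = []
    for ch in text:
        if 0xE0000 <= ord(ch) <= 0xE007F:
            continue  # tag block: can smuggle hidden ASCII instructions
        if unicodedata.category(ch) == "Cf":
            continue  # zero-width and other invisible format characters
        out.append(ch)
    return "".join(out)

# A benign-looking request with a hidden payload appended in tag characters:
hidden = "".join(chr(0xE0000 + ord(c)) for c in "ignore prior rules")
print(strip_invisible("Summarize this report." + hidden))
# -> "Summarize this report."
```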

A New Security Scale: Identity and Intent

Opinions vary on how far security strategy must shift, but the consensus is that the new landscape is radically different from the past. While foundational cybersecurity principles remain, the nuances of agents introduce new complexities:

  • Identity and Access: Agents have their own identity and act on behalf of a human user. This necessitates a rigid chain of custody for identity and authorization. However, current industry practices for agent identity are largely a “lift and shift” of existing protocols (such as JWT assertions). Existing Role-Based Access Control (RBAC) on external services (like Salesforce) must still apply, but the polymorphic behavior of the agent complicates this traditional model (see the first sketch after this list).
  • Intent-Based Security: Traditional security controls like Data Loss Prevention (DLP) often rely on pattern matching, which breaks down when dealing with agents. For example, an HR agent legitimately processing a Social Security number for a background check must be allowed, while an agent trying to “dig up dirt” on an employee must be blocked. The security challenge shifts from blocking explicit data patterns to analyzing the intent of the natural-language query, a computationally intensive and costly process (see the second sketch below).
  • Networking and Isolation: A critical step is to limit the blast radius. Experts argue that agents should not be given unrestricted access to the corporate network. Instead, they should operate within a strictly defined, isolated enclave. Furthermore, limiting agents to read-only functions where possible and constraining their actions at the tool level adds necessary guardrails against hijacking (see the third sketch below).
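
On identity, the claim set below sketches what a chain-of-custody token for an agent might carry, loosely modeled on the actor (“act”) claim from OAuth token exchange (RFC 8693). The identifiers are hypothetical and signing is omitted; only the shape of the claims is the point.

```python
import json, time, uuid

# Hypothetical claim set for an agent acting on behalf of a human user.
# Signing and verification are omitted; this only shows the shape of a
# chain-of-custody token.

def agent_token_claims(user: str, agent_id: str, scopes: list[str]) -> str:
    now = int(time.time())
    claims = {
        "sub": user,                # the human the agent acts for
        "act": {"sub": agent_id},   # the agent actually making the call
        "scope": " ".join(scopes),  # narrow, task-specific permissions
        "iat": now,
        "exp": now + 300,           # short-lived: limits the replay window
        "jti": str(uuid.uuid4()),   # unique ID for audit trails
    }
    return json.dumps(claims, indent=2)

print(agent_token_claims("alice@example.com", "agent://hr-screener", ["crm.read"]))
```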
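For intent-based security, the fragment below contrasts the two models: rather than a regex blocking every nine-digit pattern, access to the sensitive field is gated on a classified intent. The classifier here is a trivial stub; a real one would be an LLM or trained model, which is exactly where the computational cost comes from.

```python
# Sketch of intent-gated access to sensitive data: instead of blocking on a
# pattern match for an SSN, the request's stated purpose is classified first.
# classify_intent is a stub standing in for a real model.

ALLOWED_INTENTS = {"background_check", "payroll_processing"}

def classify_intent(query: str) -> str:
    """Stand-in for an LLM-based intent classifier."""
    return "background_check" if "background check" in query.lower() else "unknown"

def may_access_ssn(query: str) -> bool:
    return classify_intent(query) in ALLOWED_INTENTS

print(may_access_ssn("Run a background check on candidate 4417"))  # True
print(may_access_ssn("Find dirt on employee 4417"))                # False
```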
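And for isolation, one practical pattern is to hand the agent a wrapped toolbox in which only allowlisted, read-only tools exist at all. The sketch below is illustrative; a production enclave would also enforce network egress rules outside the process.

```python
# Sketch of tool-level guardrails: the agent only sees an allowlisted,
# audited view of the tools, so destructive functions simply do not exist
# from its perspective. All names are illustrative.

READ_ONLY_TOOLS = {"search_tickets", "get_customer"}

def make_toolbox(all_tools: dict) -> dict:
    """Expose only allowlisted, read-only tools to the agent."""
    def guard(name, fn):
        def wrapped(*args, **kwargs):
            print(f"[audit] {name} args={args}")  # log every call for audit
            return fn(*args, **kwargs)
        return wrapped
    return {n: guard(n, f) for n, f in all_tools.items() if n in READ_ONLY_TOOLS}

tools = make_toolbox({
    "search_tickets": lambda q: f"results for {q!r}",
    "delete_customer": lambda cid: "destructive!",  # never exposed
})
print(sorted(tools))                      # ['search_tickets']
print(tools["search_tickets"]("refund"))
```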

Navigating the AI Hype Cycle

While the hype around AI is undeniable—with many companies experimenting but few seeing immediate, tangible benefits—the long-term impact is clear. The economic output and productivity gains are too significant to ignore.

The current challenge lies not just in securing the technology, but in achieving production-readiness. Building a quick demo is easy; scaling a robust, secure, production-ready agent is a far greater hurdle. As the industry matures, the focus will shift from simple demonstrations to rigorous evaluation frameworks—building systems on AI “chops,” not just “software chops.”

The need for democratized, open-source models is also paramount to prevent a scenario where a small handful of hyperscalers control the core technology, creating a dangerous single point of failure in the global software supply chain. Ultimately, the security of Agentic AI will rely on the industry embracing Zero Trust principles and developing security controls that can manage unpredictable behavior based on intent, rather than static rules.

Author's Bio

Joann Varello

Head of Marketing, ONUG