The State of Agentic AI: Why the Industry Needs a Vendor-Independent Overlay Reference Architecture Now

As enterprise IT leaders race to integrate Agentic AI into their infrastructure, they are hitting a wall. The promise of AI agents autonomously driving operational decisions and orchestrating data movement across complex environments is tantalizing—but the practicalities of deploying such systems are proving overwhelming. Without a shared architectural blueprint, organizations are left navigating a maze of proprietary solutions, incomplete integrations, and security blind spots.

This is the moment for a vendor-independent Agentic AI Overlay Reference Architecture.

What IT Executives Are Struggling With

Agentic AI isn’t just another workload; it’s an architectural transformation. Agents need to live in cloud environments, in enterprise data centers, at the edge, and everywhere in between. This creates a level of operational complexity that existing infrastructure wasn’t designed to handle.

IT executives are contending with:

  • Non-deterministic data flows from distributed agents
  • Unpredictable latency and processing demands
  • Fragmented control planes across networking, identity, and orchestration stacks
  • Security blind spots in stateless or tunnel-less routing scenarios
  • A lack of interoperability between vendor-specific agents

The traditional model of centralized control has reached its limits. It’s too slow, too siloed, and too brittle to handle the velocity and variety of data that Agentic AI systems require.

Why a Reference Architecture Is Essential

A reference architecture provides a shared foundation to align enterprise requirements and vendor capabilities. For IT consumers, it offers a path forward—an actionable framework for building, testing, and procuring AI-native infrastructure components. For suppliers, it clarifies what to build and how to ensure interoperability, creating a healthier, more collaborative ecosystem.

This is not about standardization. It’s about acceleration.

The ONUG Collaborative’s Agentic AI Overlay Working Group is leading this charge. Its goal is to define a reference architecture rooted in two primary capabilities: operational intelligence and data orchestration. These form the core of what enterprises need to run Agentic AI systems at scale.

The Inference Optimization Challenge

One of the thorniest challenges in Agentic AI deployment is inference optimization. In today’s enterprise environments, inference tasks often suffer from inconsistent compute placement, network congestion, and poor data locality. This leads to increased latency, degraded performance, and higher costs.

In large organizations, the problem is compounded by uneven access to inference resources. One business unit may have zero utilization of its AI accelerators, while another may be operating at 100%. There is no dynamic optimization scheme in place today to rebalance these workloads across the enterprise. As a result, agents owned by a business unit running at full capacity are blocked or throttled, even though idle inference capacity exists elsewhere in the organization. This inefficiency not only wastes expensive compute but also directly impairs agent responsiveness and overall system performance.
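To make the imbalance concrete, here is a minimal sketch of the kind of dynamic rebalancing the article says is missing today: a greedy scheduler that routes overflow inference tasks from saturated accelerator pools to idle ones. The `Pool` model, field names, and the greedy strategy are all illustrative assumptions, not part of any ONUG specification.

```python
from dataclasses import dataclass

@dataclass
class Pool:
    """Inference capacity owned by one business unit (hypothetical model)."""
    name: str
    capacity: int  # accelerator slots available
    queued: int    # inference tasks waiting on this pool

def rebalance(pools):
    """Greedy sketch: move overflow tasks from saturated pools to idle ones.

    Returns a list of (task_count, from_pool, to_pool) placements and
    mutates the pools in place to reflect the moves.
    """
    placements = []
    # Visit the most oversubscribed pools first.
    donors = sorted(pools, key=lambda p: p.queued - p.capacity, reverse=True)
    for src in donors:
        overflow = src.queued - src.capacity
        if overflow <= 0:
            continue
        for dst in pools:
            free = dst.capacity - dst.queued
            if dst is src or free <= 0:
                continue
            moved = min(overflow, free)
            src.queued -= moved
            dst.queued += moved
            placements.append((moved, src.name, dst.name))
            overflow -= moved
            if overflow == 0:
                break
    return placements

# One unit at 175% load, another nearly idle -- the scenario described above.
pools = [Pool("finance", capacity=8, queued=14),
         Pool("marketing", capacity=8, queued=2)]
moves = rebalance(pools)
```

A production scheme would of course weigh data locality, network cost, and policy constraints before moving a workload; this sketch only shows why enterprise-wide visibility into utilization is the prerequisite.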

The challenge extends beyond raw compute. During recent discussions within the ONUG Collaborative, several IT leaders raised concerns about agents sharing memory—whether across business units, clouds, or vendors. While shared memory may theoretically accelerate learning and response times, it introduces substantial risks around data privacy, sovereignty, and operational control. Without tight security and governance frameworks, such sharing could expose enterprises to compliance violations or data breaches.

The question of agent control looms large. In distributed agent environments, who enforces behavior, audits decisions, or disables rogue agents? How is access revoked when an agent is compromised? These concerns make identity, policy enforcement, and end-to-end encryption fundamental to agent communication and orchestration.

To address these issues, the working group is exploring protocols and architectural primitives for trusted, real-time coordination. Key among them are the Model Context Protocol (MCP) for dynamic context-sharing between applications and agents, and Agent-to-Agent (A2A) protocols for encrypted, trusted peer communication. Additional efforts are underway to define stateless overlay routing, agent directories, and telemetry-aware control systems that collectively create a programmable, secure data exchange for Agentic AI.
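The identity and trust requirements behind A2A-style communication can be illustrated with a toy signed-envelope scheme: each message carries the sender and recipient identities plus a signature the receiver can verify before acting. This uses a shared HMAC key purely for brevity; real agent-to-agent protocols would use per-agent asymmetric credentials and mutual authentication, and the envelope fields here are assumptions for illustration.

```python
import hashlib
import hmac
import json

def sign_envelope(sender_id, recipient_id, payload, key):
    """Build a signed agent-to-agent message envelope (toy HMAC sketch)."""
    # Canonical serialization so both sides compute the same bytes.
    body = json.dumps(
        {"from": sender_id, "to": recipient_id, "payload": payload},
        sort_keys=True,
    )
    sig = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "sig": sig}

def verify_envelope(envelope, key):
    """Recompute the signature and compare in constant time."""
    expected = hmac.new(key, envelope["body"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, envelope["sig"])

# An agent signs a task handoff; the peer verifies before accepting it.
env = sign_envelope("agent-finance", "agent-ops", {"task": "sync"}, b"secret")
ok = verify_envelope(env, b"secret")
```

Rejecting any envelope that fails verification gives the overlay a hook for the control questions raised above: auditing, revocation, and disabling a rogue agent all reduce to managing which keys the directory still trusts.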

The Vision: A Secure, Autonomous Overlay

The Agentic AI Overlay isn’t just about faster networks or smarter routing; it’s about redefining the enterprise fabric itself. Instead of managing infrastructure as a static set of pipes and endpoints, enterprises need a dynamic, AI-native data exchange—one that can prioritize flows, enforce policy, and orchestrate workloads in real time.

This architecture must:

  • Support distributed agents with local decision-making capabilities
  • Enable federated communication across vendor domains
  • Run under a shared policy framework informed by business logic, compliance, and risk profiles
  • Be modular, extensible, and vendor-agnostic

The end goal is not just interoperability; it’s agility. Enterprises must be able to deploy AI-native workloads confidently, securely, and at scale.

Call to Action: Help Shape the Future

This is your opportunity to influence the direction of enterprise AI infrastructure. The ONUG Collaborative’s Agentic AI Overlay Working Group is actively seeking participants—especially from IT organizations—to contribute to the development of the reference architecture.

By participating, you ensure that your organization’s requirements are heard and reflected in the architecture. You gain early visibility into emerging patterns and technologies. And you help shape an open ecosystem that benefits suppliers and consumers alike.

Join us in defining the future of Agentic AI infrastructure.

👉 To participate, go here and simply sign up.

Author's Bio

Nick Lippis

Co-Founder & Co-Chair, ONUG
