Networking in 2030: The Agentic AI Vision for the Enterprise

This blog summarizes a session at The AI Networking Summit presented by ONUG, NVIDIA, and Cisco. For the full recording of this session, visit here.

The future of networking, accelerated by the rise of AI, is rapidly approaching the enterprise. While today’s headlines focus on the massive, power-hungry AI clusters of hyperscalers (some of which are literally buying nuclear power plants for energy), the true transformation for most large organizations will be a shift to owning and operating their own AI infrastructure on-premises.

This shift, projected by firms like UBS to accelerate around 2028-2029, is driven by fundamental concerns: security, IP protection, compliance, and performance needs. As Cisco’s Tom Gillis and NVIDIA’s Kevin Deierling discussed at The AI Networking Summit in NYC this past October, the networking infrastructure must evolve to support this transition.

Defining the AI Stack for the Enterprise

In the 2030 enterprise data center, the network will bifurcate:

  1. The Front-End Network: This is the traditional network you know today, handling general-purpose compute, storage, firewalls, and load balancers.
  2. The Back-End Network (The AI Fabric): This is a new class of network dedicated to high-speed, low-latency GPU-to-GPU communication. It needs to be deterministic, extremely high-throughput, and intolerant of any fluctuation, behaving more like a grown-up PCI Express bus than a general-purpose network (a back-of-the-envelope throughput sketch follows this list).
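
To make that throughput requirement concrete, here is a rough back-of-the-envelope sketch (not from the session) of the per-GPU traffic generated by a ring all-reduce, the collective operation that dominates distributed training. The model size, GPU count, and link speed below are illustrative assumptions.

```python
# Back-of-the-envelope: per-GPU traffic for one ring all-reduce of the
# gradients. All numbers below are illustrative assumptions, not figures
# from the talk.

def ring_allreduce_bytes_per_gpu(payload_bytes: float, num_gpus: int) -> float:
    """In a ring all-reduce, each GPU sends (and receives) 2*(N-1)/N times the payload."""
    return 2 * (num_gpus - 1) / num_gpus * payload_bytes

GRADIENT_BYTES = 70e9 * 2  # e.g. a 70B-parameter model with fp16 gradients
NUM_GPUS = 1024            # hypothetical back-end fabric size
LINK_GBPS = 400            # hypothetical per-GPU link speed (400 Gb/s)

traffic = ring_allreduce_bytes_per_gpu(GRADIENT_BYTES, NUM_GPUS)
seconds = traffic * 8 / (LINK_GBPS * 1e9)
print(f"~{traffic / 1e9:.0f} GB on the wire per GPU per step, "
      f"~{seconds:.2f} s at {LINK_GBPS} Gb/s if the fabric is lossless")
```

Under these assumed numbers, a single gradient synchronization occupies every link for seconds at a time, which is why any loss or jitter on the back-end fabric directly stalls every GPU in the job.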

The challenge for vendors like Cisco and NVIDIA is delivering the performance AI workloads require while adhering to the real-world constraints of the enterprise data center: limited space, power, and cooling (avoiding the “lunatic fringe” of liquid-cooled, 800 kW racks). This collaboration focuses on building air-cooled, power-efficient, integrated solutions that make AI consumable for large organizations.

Simplifying Deployment and Management

For enterprises starting small, perhaps with a few dozen to a thousand GPUs, the path to adoption must be simplified. The integration between Cisco and NVIDIA aims to deliver turnkey solutions, moving beyond complex “reference architectures.”

This simplification includes eliminating hardware appliances like firewalls and load balancers, replacing them with software running in a distributed, integrated rack-scale system. Furthermore, the massive volume of metadata generated by AI applications—potentially 1,000 times greater than today’s apps—necessitates a new approach to logging and analysis, leveraging local, low-cost storage repositories federated by tools like Splunk.
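
As a hypothetical illustration of that federated-logging pattern (the function names and sites below are invented, and this is not a Splunk API): a thin federation layer can fan a query out to each site’s local, low-cost repository and merge the hits, so the raw metadata never has to leave the site.

```python
# Illustrative sketch: AI telemetry lands in cheap local stores at each
# site; a federation layer fans queries out and merges results so raw
# metadata is never centralized. Sites and events are invented.

from concurrent.futures import ThreadPoolExecutor
from typing import Callable

# Hypothetical per-site query function: takes a query string, returns rows.
SiteQuery = Callable[[str], list[dict]]

def federated_search(query: str, sites: dict[str, SiteQuery]) -> list[dict]:
    """Fan the query out to every site's local repository and merge the hits."""
    with ThreadPoolExecutor(max_workers=len(sites)) as pool:
        futures = {name: pool.submit(fn, query) for name, fn in sites.items()}
    merged = []
    for name, fut in futures.items():
        for row in fut.result():
            merged.append({"site": name, **row})
    return merged

# Toy local repositories standing in for on-prem object stores.
sites = {
    "nyc-dc": lambda q: [{"event": "gpu_xid", "sev": "warn"}] if "gpu" in q else [],
    "sjc-dc": lambda q: [{"event": "gpu_ecc", "sev": "info"}] if "gpu" in q else [],
}
print(federated_search("gpu errors last 24h", sites))
```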

Crucially, Kubernetes is positioned to become the central orchestration layer, but with a twist: AI agents will sit on top, abstracting away the complexity of managing 140+ configuration parameters. This agentic AI will automate provisioning: IT staff specify workload requirements (such as uptime and response time) at a service level, and the AI calculates the optimal hardware and network configuration.
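
A minimal sketch of that “intent in, configuration out” idea follows; the fields, thresholds, and planner logic are invented for illustration and are not a real Cisco or NVIDIA interface.

```python
# Hypothetical illustration: the operator states service-level requirements
# and an agentic planner derives the low-level knobs. All fields and
# thresholds here are invented assumptions.

from dataclasses import dataclass

@dataclass
class WorkloadIntent:
    name: str
    uptime_pct: float        # e.g. 99.9
    p99_latency_ms: float    # response-time objective
    tokens_per_sec: int      # required inference throughput

def plan_deployment(intent: WorkloadIntent) -> dict:
    """Stand-in for the agentic planner: map service-level intent to config."""
    replicas = 3 if intent.uptime_pct >= 99.9 else 1          # HA needs spares
    gpus = max(1, intent.tokens_per_sec // 5000)              # assumed per-GPU rate
    fabric = "backend-ai-fabric" if gpus > 1 else "frontend"  # multi-GPU -> AI fabric
    return {
        "replicas": replicas,
        "gpus_per_replica": gpus,
        "network": fabric,
        "priority": "latency" if intent.p99_latency_ms < 100 else "throughput",
    }

print(plan_deployment(WorkloadIntent("chat-assistant", 99.9, 80, 20000)))
```

The point is the division of labor: the operator never touches the 140+ low-level parameters, because the planner owns that mapping.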

The Agentic Edge and the IP Challenge

AI is also set to enable the edge-computing wave that never fully materialized in its first incarnation. No longer just latency-driven, the new edge will be intelligent, data-driven, and protection-driven. Content Delivery Networks (CDNs) will evolve into Generative Distribution Networks, pushing inferencing and personalized experiences (like a “hyper-individualized web”) closer to the user to reduce core egress costs and enhance security.
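
One way to picture a Generative Distribution Network is as an anycast-style router for models: send each inference request to the lowest-latency point of presence that hosts the model, and fall back to the core only when no edge copy exists. The PoPs, model names, and latencies below are invented for illustration.

```python
# Toy model of edge inference routing. PoPs, models, and RTTs are invented.

EDGE_POPS = {
    "nyc":  {"models": {"personalize-v1"}, "rtt_ms": 8},
    "chi":  {"models": {"personalize-v1", "summarize-v2"}, "rtt_ms": 21},
    "core": {"models": {"personalize-v1", "summarize-v2"}, "rtt_ms": 65},
}

def route(model: str) -> str:
    """Pick the lowest-RTT location serving the model; the core always can."""
    candidates = [(pop["rtt_ms"], name) for name, pop in EDGE_POPS.items()
                  if model in pop["models"]]
    return min(candidates)[1]

print(route("personalize-v1"))  # -> nyc: served at the edge, no core egress
print(route("summarize-v2"))    # -> chi: nearest PoP holding that model
```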

Finally, the threat of intellectual property (IP) leakage when using public AI models is driving the need for more secure, on-prem AI deployment. Enterprises need to harvest their unique domain expertise into proprietary on-prem models, moving away from public services to protect their competitive advantage. The adoption of AI is no longer optional; businesses that fail to integrate AI for massive productivity gains risk being left behind by their faster, AI-enabled competitors.

For CIOs and the board, the message is clear: Stop doing endless Proofs of Concept (POCs). Start deploying air-cooled, integrated AI infrastructure now to future-proof your organization.

Author's Bio

Joann Varello

Head of Marketing, ONUG