As AI workloads explode across enterprises, networks designed for small, random, short-lived flows are being overwhelmed by the reality of modern AI traffic: massive, synchronized, long-lived flows that saturate links and expose the fragility of static architectures. Recent data shows that AI traffic is not just heavier but fundamentally different: predominantly multicast or all-to-all, hypersensitive to latency, and capable of turning traditional switch oversubscription and ECMP load balancing into bottlenecks.
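The ECMP point deserves unpacking: ECMP assumes many flows with diverse headers so that hashing spreads load evenly, while AI collectives produce a handful of enormous flows, so a single hash collision can double the load on one link while others sit idle. A minimal sketch of that failure mode (the MD5-based hash, path count, and flow sizes are illustrative assumptions, not any vendor's implementation):

```python
import hashlib
import random

def ecmp_path(five_tuple, n_paths):
    # Hash a flow identifier to pick one of n equal-cost paths,
    # mimicking per-flow ECMP (toy hash; real switches vary).
    digest = hashlib.md5(str(five_tuple).encode()).digest()
    return digest[0] % n_paths

def link_loads(flows, n_paths=8):
    # flows: list of (flow_id, size); returns total bytes per path.
    loads = [0] * n_paths
    for flow_id, size in flows:
        loads[ecmp_path(flow_id, n_paths)] += size
    return loads

random.seed(0)
# Many small, high-entropy flows: hashing spreads load almost evenly.
small = [((random.random(), i), 1) for i in range(100_000)]
# Four synchronized elephant flows, as in an AI collective: the same
# total bytes, but now one unlucky hash collision doubles a link's load
# while other links carry nothing.
elephants = [((random.random(), i), 25_000) for i in range(4)]

print(link_loads(small))      # close to uniform across the 8 paths
print(link_loads(elephants))  # lumpy: several paths idle
```

Running both cases side by side shows why entropy-dependent balancing degrades exactly when flows become few, large, and synchronized.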
This panel will examine why the industry's traditional best-effort, entropy-dependent designs can't support AI's unforgiving performance needs. Expect real data on what happens when a single AI job eats your bandwidth for weeks, and on the hidden costs when tail latency drags out training runs by 50% or more.