From GenAI to Agentic AI: Why Networks Now Define Outcomes
As artificial intelligence (AI) is embedded across ever more enterprise applications, it is reshaping enterprise network traffic.
As AI becomes part of the operational fabric of the enterprise, it introduces data flows that enterprise networks were never designed to carry, making connectivity between clouds, data centres and edge environments critical. This shift underscores a simple truth: AI is only as effective as the network that carries its data.
Network performance is therefore no longer just an infrastructure concern; it is fast becoming a direct driver of customer experience, speed to market and business outcomes. If two agentic AI models are identical but only one runs on a purpose-built, high-performance network, which will deliver better outcomes?
From GenAI to agentic AI: a step change for networks
The current GenAI era enables models to generate content and insights. Network traffic increases only slightly as users upload data and receive outputs, and interactions remain largely human-driven and episodic.
The agentic AI wave will represent a more fundamental shift. These intelligent systems operate autonomously, executing multi‑step actions with minimal human involvement. Agents communicate continuously with other agents, systems and data sources to achieve outcome-based objectives.
This transition moves enterprise traffic from predictable, human-paced interactions to machine-paced, always-on flows. These workloads are more latency sensitive, highly distributed across clouds, data centres and the edge, and far less tolerant of disruption.
In such environments, the network can become a performance multiplier or a limiter.
What to consider in network architecture for agentic AI
As AI introduces new requirements across the network stack, legacy infrastructure designed for predictable, human-driven traffic will increasingly struggle to support intelligent systems. In practice, this translates into four core architectural considerations:
- Bandwidth demand is continuous and scalable
- Latency becomes a business-critical metric
- Resilience is imperative
- Security and sovereignty must be built in, not bolted on
Omdia’s research points to a dramatic uplift in connectivity driven by AI adoption. Both financial services and manufacturing respondents expect a more than 200% increase in cloud/data centre bandwidth, which appears to be driven by growth in 100G circuits at large sites. Respondents also expect a more than 90% increase in branch bandwidth, delivered over business-grade network services that combine internet and private networks.
While some AI workloads can tolerate limited delay, agentic AI distributes decision making across multi-cloud, SaaS and edge environments. In this model, inconsistent latency directly impacts operational efficiency. For a global manufacturer operating more than 30 production sites, even minor latency fluctuations can delay AI-driven decisions on the shop floor, turning milliseconds of delay into lost output and unplanned downtime.
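The compounding effect described above can be made concrete with simple arithmetic. The sketch below is illustrative only; the step counts, round-trip times and compute times are hypothetical, not drawn from Omdia’s research.

```python
# Illustrative only: how per-step network latency compounds across a
# chain of dependent agent calls. All figures here are hypothetical.

def workflow_delay_ms(steps: int, latency_ms: float, compute_ms: float) -> float:
    """Total wall-clock time for a chain of dependent agent calls,
    each paying one network round trip plus local compute."""
    return steps * (latency_ms + compute_ms)

# A 20-step agent chain with 10 ms of compute per step:
fast = workflow_delay_ms(20, 5, 10)    # 5 ms round trips -> 300 ms total
slow = workflow_delay_ms(20, 60, 10)   # 60 ms round trips -> 1400 ms total
print(fast, slow)
```

Because agents chain many dependent calls, a 55 ms difference per hop widens to over a second per workflow, which is why latency variance matters more for agentic AI than for episodic human interactions.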
To maximise returns on costly GPU investments and AI platforms, enterprises need to ensure minimal downtime for their AI systems. This increases the importance of multi‑path connectivity, automated failover and SLA‑backed services designed for continuous operation.
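The failover logic referred to above can be sketched at its simplest as priority-ordered path selection. This is a minimal illustration under assumed names; the `probe_path` health check and path labels are hypothetical, and production networks implement this with routing protocols (e.g. BGP with BFD) or SD-WAN policies rather than application code.

```python
# A minimal failover sketch, assuming two pre-provisioned paths and a
# health probe. Path names and the probe are hypothetical placeholders.

def probe_path(path: str, down: set[str]) -> bool:
    """Placeholder health check; a real probe would use ping or BFD telemetry."""
    return path not in down

def select_path(paths: list[str], down: set[str]) -> str:
    """Return the highest-priority healthy path, failing over in order."""
    for path in paths:
        if probe_path(path, down):
            return path
    raise RuntimeError("all paths down")

# Primary link fails: traffic shifts to the secondary automatically.
print(select_path(["primary", "secondary"], down={"primary"}))  # → secondary
```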
As AI drives higher data volumes across clouds, data centres and geographies, network architecture increasingly determines how data is routed, segmented and controlled. Security and data sovereignty shift from policy‑driven overlays to foundational design principles engineered into the network fabric from the outset.
Three network design priorities
As enterprises progress from GenAI to agentic AI, WAN planning must reflect the operational realities of intelligent, autonomous systems. Three network design priorities stand out:
- Adopt AI‑first network architectures
- Strengthen your multi‑cloud and edge connectivity
- Embed security into network design
AI workloads place sustained and highly variable traffic demand on the network. Networks should be designed around AI performance profiles, with scalable bandwidth, high-performance cloud on-ramps for multi-cloud deployments, and proactive capacity planning to avoid reactive upgrades.
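Proactive capacity planning of the kind described above often reduces to one question: given current utilisation and a growth rate, how long until a circuit needs upgrading? The sketch below illustrates that calculation; the utilisation figures, growth rate and 80% upgrade threshold are hypothetical assumptions, not Omdia figures.

```python
# A rough capacity-planning sketch. All inputs are hypothetical examples.
import math

def months_to_threshold(current_util: float, monthly_growth: float,
                        threshold: float = 0.8) -> int:
    """Months until utilisation (as a fraction of capacity) exceeds the
    upgrade threshold, assuming compound monthly growth."""
    if current_util >= threshold:
        return 0
    return math.ceil(math.log(threshold / current_util)
                     / math.log(1 + monthly_growth))

# A circuit 40% utilised today, with traffic growing 10% per month,
# crosses the 80% threshold in 8 months, so the upgrade must start
# well inside that window.
print(months_to_threshold(0.4, 0.10))  # → 8
```

Running this forecast continuously against per-circuit telemetry is what turns capacity planning from a reactive upgrade cycle into a scheduled one.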
AI workflows increasingly span public cloud, private and sovereign environments, edge compute nodes and data centres. WANs must provide seamless, high-quality connectivity through direct cloud and SaaS peering, dynamic routing and low-latency regional interconnects.
As AI models become sources of business differentiation, governance and compliance requirements intensify. Security must be engineered into the WAN, with encryption options such as MACsec for link-layer encryption and traffic controls such as security service edge (SSE) to reduce exposure to shadow IT and uncontrolled data flows.
Omdia’s research makes it clear that enterprises cannot realise the full business potential of AI without modernising their network foundations. As enterprises deploy AI at scale, networks are no longer passive enablers of connectivity. They are becoming the operational backbone for intelligent, autonomous systems.
Find out how Telstra International can help re‑architect your network for high‑bandwidth, low‑latency and resilient connectivity, with security embedded by design so your business is ready to harness AI’s accelerating impact.