Cisco has entered an increasingly aggressive race to dominate AI data centre interconnect technology, becoming the latest major player to unveil purpose-built routing hardware for connecting distributed AI workloads across multiple facilities.
The networking giant unveiled its 8223 routing system on October 8, introducing what it claims is the industry's first 51.2 terabit-per-second fixed router designed specifically to link data centres running AI workloads.
At its core sits the new Silicon One P200 chip, representing Cisco's answer to a problem that is increasingly constraining the AI industry: what happens when you run out of room to grow.
A three-way battle for scale-across supremacy?
For context, Cisco isn't alone in recognising this opportunity. Broadcom fired the first salvo in mid-August with its "Jericho 4" StrataDNX switch/router chips, which began sampling and also offered 51.2 Tb/sec of aggregate bandwidth backed by HBM memory for deep packet buffering to manage congestion.
Two weeks after Broadcom's announcement, Nvidia unveiled its Spectrum-XGS scale-across network, a notably cheeky name given that Broadcom's "Trident" and "Tomahawk" switch ASICs belong to the StrataXGS family.
Nvidia secured CoreWeave as its anchor customer but offered limited technical details about the Spectrum-XGS ASICs. Now Cisco is rolling out its own components for the scale-across networking market, setting up a three-way competition among networking heavyweights.
The problem: AI is too big for one building
To understand why multiple vendors are rushing into this space, consider the scale of modern AI infrastructure. Training large language models or running complex AI systems requires thousands of high-powered processors working in concert, generating enormous amounts of heat and consuming vast amounts of electricity.
Data centres are hitting hard limits: not just on available space, but on how much power they can supply and how much heat they can remove.
"AI compute is outgrowing the capacity of even the largest data centre, driving the need for reliable, secure connection of data centres hundreds of miles apart," said Martin Lund, Executive Vice President of Cisco's Common Hardware Group.
The industry has traditionally addressed capacity challenges through two approaches: scaling up (adding more capability to individual systems) or scaling out (connecting more systems within the same facility).
But both strategies are reaching their limits. Data centres are running out of physical space, power grids cannot supply enough electricity, and cooling systems cannot dissipate the heat fast enough.
This forces a third approach: "scale-across," distributing AI workloads across multiple data centres that may sit in different cities or even different states. However, this creates a new problem: the connections between those facilities become critical bottlenecks.
Why traditional routers fall short
AI workloads behave differently from typical data centre traffic. Training runs generate massive, bursty traffic patterns: periods of intense data movement followed by relative quiet. If the network connecting data centres cannot absorb these surges, everything slows down, wasting expensive computing resources and, critically, time and money.
Traditional routing equipment wasn't designed for this. Most routers prioritise either raw speed or sophisticated traffic management, but struggle to deliver both simultaneously while keeping power consumption reasonable. For AI data centre interconnect applications, organisations need all three: speed, intelligent buffering, and efficiency.
Cisco's answer: the 8223 system
Cisco's 8223 system represents a departure from general-purpose routing equipment. Housed in a compact three-rack-unit chassis, it delivers 64 ports of 800-gigabit connectivity, currently the highest density available in a fixed routing system. More importantly, it can process over 20 billion packets per second and scale up to 3 exabytes per second of interconnect bandwidth.
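A quick back-of-envelope check shows where the headline 51.2 Tb/sec figure comes from; the sketch below is illustrative arithmetic only, not Cisco code.

```python
# Back-of-envelope check of the 8223's headline throughput figure.
ports = 64               # 800G ports on the fixed 3RU chassis
port_speed_gbps = 800    # gigabits per second per port

total_gbps = ports * port_speed_gbps
total_tbps = total_gbps / 1000

print(f"{ports} ports x {port_speed_gbps} Gb/s = {total_tbps:.1f} Tb/s")
# -> 64 ports x 800 Gb/s = 51.2 Tb/s, matching the P200's quoted capacity
```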
The system's distinguishing feature is its deep buffering capability, enabled by the P200 chip. Think of buffers as temporary holding areas for data, like a reservoir that catches water during heavy rain. When AI training generates traffic surges, the 8223's buffers absorb the spike, preventing the network congestion that would otherwise leave expensive GPU clusters sitting idle waiting for data.
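To make the reservoir analogy concrete, here is a minimal, hypothetical simulation of how a deep buffer rides out a traffic burst that a shallow buffer cannot; every number is invented for illustration and unrelated to the P200's real specifications.

```python
# Toy illustration of why deep buffers matter for bursty AI traffic.
# A link drains at a fixed rate; arriving traffic that exceeds the buffer is dropped.

def run(buffer_capacity, arrivals, link_rate=100):
    queued, dropped = 0, 0
    for arriving in arrivals:
        queued += arriving
        if queued > buffer_capacity:
            dropped += queued - buffer_capacity  # overflow is lost; GPUs stall waiting for retransmits
            queued = buffer_capacity
        queued -= min(queued, link_rate)         # drain at line rate each step
    return dropped

# Bursty pattern: quiet periods punctuated by large surges from a training job.
burst = [20, 20, 400, 350, 20, 20, 300, 20]

print("shallow buffer drops:", run(buffer_capacity=150, arrivals=burst))
print("deep buffer drops:   ", run(buffer_capacity=1000, arrivals=burst))
# The deep buffer absorbs the surge; the shallow one sheds traffic at the peak.
```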
Power efficiency is another critical advantage. As a 3RU system, the 8223 achieves what Cisco describes as "switch-like power efficiency" while retaining full routing capabilities, which matters when data centres are already straining their power budgets.
The system also supports 800G coherent optics, enabling connections spanning up to 1,000 kilometres between facilities, essential for geographic distribution of AI infrastructure.
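Distance carries a latency cost that scale-across designs have to tolerate. A rough calculation, assuming light propagates through optical fibre at roughly 200,000 km/s and ignoring equipment delay, gives a sense of the round-trip time between facilities 1,000 km apart:

```python
# Rough propagation-delay estimate for a 1,000 km inter-site link.
# Assumes ~200,000 km/s for light in fibre (about two-thirds of c);
# ignores switching, queuing, and FEC latency, so real figures will be higher.

distance_km = 1000
fibre_speed_km_per_s = 200_000

one_way_ms = distance_km / fibre_speed_km_per_s * 1000
print(f"one-way: ~{one_way_ms:.0f} ms, round trip: ~{2 * one_way_ms:.0f} ms")
# -> roughly 5 ms one way, 10 ms round trip before any equipment delay
```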
Industry adoption and real-world applications
Major hyperscalers are already deploying the technology. Microsoft, an early Silicon One adopter, has found the architecture valuable across multiple use cases.
Dave Maltz, technical fellow and corporate vice president of Azure Networking at Microsoft, noted that "the common ASIC architecture has made it easier for us to expand from our initial use cases to multiple roles in DC, WAN, and AI/ML environments."
Alibaba Cloud plans to use the P200 as a foundation for expanding its eCore architecture. Dennis Cai, vice president and head of network infrastructure at Alibaba Cloud, said the chip "will enable us to extend into the core network, replacing traditional chassis-based routers with a cluster of P200-powered devices."
Lumen is also exploring how the technology fits into its network infrastructure plans. Dave Ward, chief technology officer and product officer at Lumen, said the company is "exploring how the new Cisco 8223 technology may fit into our plans to enhance network performance and roll out advanced services to our customers."
Programmability: future-proofing the investment
One often-overlooked aspect of AI data centre interconnect infrastructure is adaptability. AI networking requirements are evolving rapidly, with new protocols and standards emerging regularly.
Traditional hardware typically requires replacement or expensive upgrades to support new capabilities. The P200's programmability addresses this challenge.
Organisations can update the silicon to support emerging protocols without replacing hardware, an important consideration when individual routing systems represent significant capital investments and AI networking standards remain in flux.
Security considerations
Connecting data centres hundreds of miles apart introduces security challenges. The 8223 includes line-rate encryption using post-quantum resilient algorithms, addressing concerns about future threats from quantum computing. Integration with Cisco's observability platforms provides detailed network monitoring to identify and resolve issues quickly.
Can Cisco compete?
With Broadcom and Nvidia already staking their claims in the scale-across networking market, Cisco faces established competition. However, the company brings advantages: a long-standing presence in enterprise and service provider networks, the mature Silicon One portfolio launched in 2019, and relationships with major hyperscalers already using its technology.
The 8223 ships initially with open-source SONiC support, with IOS XR planned for future availability. The P200 will be available across multiple platform types, including modular systems and the Nexus portfolio.
This flexibility in deployment options could prove decisive as organisations seek to avoid vendor lock-in while building out distributed AI infrastructure.
Whether Cisco's approach becomes the industry standard for AI data centre interconnect remains to be seen, but the fundamental problem all three vendors are addressing, efficiently connecting distributed AI infrastructure, will only grow more pressing as AI systems continue scaling beyond single-facility limits.
The real winner may ultimately be determined not by technical specifications alone, but by which vendor can deliver the most complete ecosystem of software, support, and integration capabilities around its silicon.