Imagine connecting thousands of powerful AI chips spread across dozens of server cabinets and making them work together as if they were a single, giant computer. That is exactly what Huawei demonstrated at HUAWEI CONNECT 2025, where the company unveiled a breakthrough in AI infrastructure architecture that could reshape how the world builds and scales artificial intelligence systems.
Instead of the conventional approach, in which individual servers work largely independently, Huawei's new SuperPoD technology creates what the company's executives describe as a single logical machine composed of thousands of separate processing units, allowing them to "learn, think, and reason as one."
The implications extend beyond impressive technical specifications, representing a shift in how AI computing power can be organised, scaled, and deployed across industries.
The technical foundation: UnifiedBus 2.0
At the core of Huawei's infrastructure approach is UnifiedBus (UB). Yang Chaobin, Huawei's Director of the Board and CEO of the ICT Business Group, explained: "Huawei has developed the groundbreaking SuperPoD architecture based on our UnifiedBus interconnect protocol. The architecture deeply interconnects physical servers so that they can learn, think, and reason like a single logical server."
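Huawei has not published a programming model alongside these remarks, but the idea of presenting many physical servers as one logical machine can be sketched at the resource-accounting level. The class names and node configuration below are hypothetical, chosen only so the totals line up with the Atlas 950 figures quoted later in this article; this is an illustration of the concept, not Huawei's design.

```python
from dataclasses import dataclass

@dataclass
class PhysicalNode:
    """Hypothetical description of one physical server inside a SuperPoD."""
    npus: int          # accelerators contributed by this node
    memory_tb: float   # memory contributed by this node

class LogicalServer:
    """Illustrative aggregation: many physical nodes exposed as a single pool,
    so a scheduler can place one job against the combined resources."""
    def __init__(self, nodes: list[PhysicalNode]):
        self.nodes = nodes

    @property
    def total_npus(self) -> int:
        return sum(n.npus for n in self.nodes)

    @property
    def total_memory_tb(self) -> float:
        return sum(n.memory_tb for n in self.nodes)

# Illustrative only: 1,024 eight-NPU nodes presented as one logical device.
pod = LogicalServer([PhysicalNode(npus=8, memory_tb=1.125) for _ in range(1024)])
print(pod.total_npus, pod.total_memory_tb)   # 8192 NPUs, 1152.0 TB
```

The hard engineering problem, of course, is the interconnect that makes such a pool behave like one machine at runtime rather than on a spreadsheet, which is where UnifiedBus comes in.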
The technical specifications reveal the scope of this achievement. The UnifiedBus protocol addresses two challenges that have historically limited large-scale AI computing: the reliability of long-range communications, and the trade-off between bandwidth and latency. Conventional copper connections provide high bandwidth but only over short distances, typically spanning no more than two cabinets.
Optical cables support longer ranges but suffer from reliability issues that become more problematic as distance and scale grow. Eric Xu, Huawei's Deputy Chairman and Rotating Chairman, said that solving these fundamental connectivity challenges was essential to the company's AI infrastructure strategy.
Xu detailed the breakthrough features in terms of the OSI model: "We have built reliability into every layer of our interconnect protocol, from the physical layer and data link layer, all the way up to the network and transmission layers. There is 100-ns-level fault detection and protection switching on optical paths, making any intermittent disconnections or faults of optical modules imperceptible at the application layer."
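Huawei has not released implementation details beyond this quote, but the general idea of per-path fault detection with transparent protection switching can be illustrated in a few lines of Python. Everything here, the class names, the failure model, and the failover logic, is a hypothetical sketch of the concept rather than Huawei's design; in real hardware the detection and switchover would happen in the interconnect itself at the claimed ~100 ns scale, not in application code.

```python
class OpticalPath:
    """Hypothetical model of one optical lane between two cabinets."""
    def __init__(self, name: str):
        self.name = name
        self.healthy = True

    def send(self, payload: bytes) -> None:
        if not self.healthy:
            raise ConnectionError(f"{self.name}: optical module fault")
        # Real hardware would transmit here; this sketch only simulates success.

class ProtectedLink:
    """Illustrative protection switching: if the active path faults,
    traffic is rerouted to a standby path so callers never see the error."""
    def __init__(self, active: OpticalPath, standby: OpticalPath):
        self.active = active
        self.standby = standby

    def send(self, payload: bytes) -> None:
        try:
            self.active.send(payload)
        except ConnectionError:
            # Detect the fault, swap to the protection path, and retransmit,
            # keeping the failure invisible to the caller (the application layer).
            self.active, self.standby = self.standby, self.active
            self.active.send(payload)

# Usage: the application keeps calling link.send() and never observes the fault.
link = ProtectedLink(OpticalPath("lane-A"), OpticalPath("lane-B"))
link.active.healthy = False      # simulate an intermittent optical module fault
link.send(b"tensor shard")       # transparently switched to lane-B
```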
SuperPoD architecture: Scale and performance
The Atlas 950 SuperPoD is the flagship implementation of this architecture, comprising up to 8,192 Ascend 950DT chips in a configuration that Xu described as delivering "8 EFLOPS in FP8 and 16 EFLOPS in FP4. Its interconnect bandwidth is 16 PB/s. This means that a single Atlas 950 SuperPoD will have an interconnect bandwidth over 10 times higher than the entire globe's total peak internet bandwidth."
The specifications are more than incremental improvements. The Atlas 950 SuperPoD occupies 160 cabinets across 1,000 m², with 128 compute cabinets and 32 communication cabinets linked by all-optical interconnects. The system's memory capacity reaches 1,152 TB, and Huawei claims a latency of 2.1 microseconds across the entire system.
Later in the product pipeline will be the Atlas 960 SuperPoD, which is set to incorporate 15,488 Ascend 960 chips in 220 cabinets covering 2,200 m². Xu said it will deliver "30 EFLOPS in FP8 and 60 EFLOPS in FP4, and come with 4,460 TB of memory and 34 PB/s interconnect bandwidth."
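Dividing the published totals by the chip counts gives a rough sense of what these headline figures imply per accelerator. The short script below uses only the numbers quoted above; it is a back-of-envelope illustration, not data released by Huawei, and it assumes the totals divide evenly across chips.

```python
# Per-chip figures derived from the totals quoted above (illustrative only).
systems = {
    # name: (chips, FP8 EFLOPS, interconnect PB/s, memory TB)
    "Atlas 950 SuperPoD": (8_192, 8, 16, 1_152),
    "Atlas 960 SuperPoD": (15_488, 30, 34, 4_460),
}

for name, (chips, eflops_fp8, pb_per_s, mem_tb) in systems.items():
    tflops_per_chip = eflops_fp8 * 1e6 / chips   # 1 EFLOPS = 1,000,000 TFLOPS
    tb_s_per_chip = pb_per_s * 1e3 / chips       # 1 PB/s = 1,000 TB/s
    mem_gb_per_chip = mem_tb * 1e3 / chips       # 1 TB = 1,000 GB
    print(f"{name}: ~{tflops_per_chip:,.0f} TFLOPS FP8, "
          f"~{tb_s_per_chip:.2f} TB/s interconnect, "
          f"~{mem_gb_per_chip:.0f} GB memory per chip")
```

On those figures, each Ascend 950DT accounts for roughly 1 PFLOPS of FP8 compute, about 2 TB/s of interconnect bandwidth, and around 140 GB of memory, while the quoted Atlas 960 totals work out to roughly double the per-chip compute and memory.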
Beyond AI: General-purpose computing applications
The SuperPoD concept extends beyond AI workloads into general-purpose computing through the TaiShan 950 SuperPoD. Built on Kunpeng 950 processors, the system addresses the enterprise challenge of replacing legacy mainframes and mid-range computers.
Xu positioned this as particularly relevant for the finance sector, where "the TaiShan 950 SuperPoD, combined with the distributed GaussDB, can serve as an ideal alternative, and replace, once and for all, mainframes, mid-range computers, and Oracle's Exadata database servers."
Open architecture strategy
Perhaps most significantly for the broader AI infrastructure market, Huawei announced the release of the UnifiedBus 2.0 technical specifications as open standards. The decision reflects both strategic positioning and practical constraints.
Xu acknowledged that "the Chinese mainland will lag behind in semiconductor manufacturing process nodes for a relatively long time" and emphasised that "sustainable computing power can only be achieved with process nodes that are practically available."
Yang framed the open approach as ecosystem building: "We are committed to our open-hardware and open-source-software approach that will help more partners develop their own industry-scenario-based SuperPoD solutions. This will accelerate developer innovation and foster a thriving ecosystem."
The company will open-source both hardware and software components. The hardware includes NPU modules, air-cooled and liquid-cooled blade servers, AI cards, CPU boards, and cascade cards. On the software side, Huawei has committed to fully open-sourcing its CANN compiler tools, Mind series application kits, and openPangu foundation models by 31 December 2025.
Market deployment and ecosystem impact
Real-world deployment provides some validation for these technical claims. More than 300 Atlas 900 A3 SuperPoD units have already been shipped in 2025, deployed for more than 20 customers across the internet, finance, services, electricity, and manufacturing sectors.
The implications for the development of China's AI infrastructure are substantial. By creating an open ecosystem around domestic technology, Huawei is addressing the challenge of building competitive AI infrastructure within the limits set by constrained semiconductor manufacturing and availability. Its approach enables broader industry participation in developing AI infrastructure solutions without requiring access to the most advanced process nodes.
For the global AI infrastructure market, Huawei's open architecture strategy introduces an alternative to the tightly integrated, proprietary hardware-and-software approach dominant among Western competitors. Whether the ecosystem Huawei proposes can achieve comparable performance and maintain commercial viability remains to be demonstrated at scale.
Ultimately, the SuperPoD architecture represents more than an incremental advance in AI computing. Huawei is proposing a fundamental rethink of how massive computational resources are connected, managed, and scaled. The open-source release of its specifications and components will test whether collaborative development can accelerate AI infrastructure innovation across an ecosystem of partners, and that has the potential to reshape competitive dynamics in the global AI infrastructure market.
See also: Huawei commits to training 30,000 Malaysian AI professionals as local tech ecosystem expands

