AI & Compute

Inside Huawei’s plan to make thousands of AI chips think like one computer

Last updated: September 25, 2025 9:15 pm
Published September 25, 2025
Eric Xu, Huawei's Rotating Chairman, during his keynote at the recent Huawei Connect 2025.

Imagine connecting thousands of powerful AI chips scattered across dozens of server cabinets and making them work together as if they were a single, massive computer. That is exactly what Huawei demonstrated at HUAWEI CONNECT 2025, where the company unveiled a breakthrough in AI infrastructure architecture that could reshape how the world builds and scales artificial intelligence systems.

Instead of traditional approaches in which individual servers operate largely independently, Huawei's new SuperPoD technology creates what the company's executives describe as a single logical machine composed of thousands of separate processing units, allowing them to "learn, think, and reason as one."

The implications extend beyond impressive technical specifications, representing a shift in how AI computing power can be organised, scaled, and deployed across industries.

The technical foundation: UnifiedBus 2.0

At the core of Huawei's infrastructure approach is UnifiedBus (UB). Yang Chaobin, Huawei's Director of the Board and CEO of the ICT Business Group, explained that "Huawei has developed the groundbreaking SuperPoD architecture based on our UnifiedBus interconnect protocol. The architecture deeply interconnects physical servers so that they can learn, think, and reason like a single logical server."

The technical specifications reveal the scope of this achievement. The UnifiedBus protocol addresses two challenges that have historically limited large-scale AI computing: the reliability of long-range communications, and the trade-off between bandwidth and latency. Traditional copper connections provide high bandwidth but only over short distances, typically spanning perhaps two cabinets.

Optical cables support longer ranges but suffer from reliability issues that become more problematic as distance and scale grow. Eric Xu, Huawei's Deputy Chairman and Rotating Chairman, said that solving these fundamental connectivity challenges was essential to the company's AI infrastructure strategy.


Xu detailed the breakthrough features in terms of the OSI model: "We have built reliability into every layer of our interconnect protocol, from the physical layer and data link layer, all the way up to the network and transport layers. There is 100-ns-level fault detection and protection switching on optical paths, making any intermittent disconnections or faults of optical modules imperceptible at the application layer."

SuperPoD architecture: Scale and performance

The Atlas 950 SuperPoD represents the flagship implementation of this architecture, comprising up to 8,192 Ascend 950DT chips in a configuration that Xu described as delivering "8 EFLOPS in FP8 and 16 EFLOPS in FP4. Its interconnect bandwidth will be 16 PB/s. This means that a single Atlas 950 SuperPoD will have an interconnect bandwidth over 10 times higher than the entire globe's total peak internet bandwidth."

The specifications are more than incremental improvements. The Atlas 950 SuperPoD occupies 160 cabinets across 1,000 m², with 128 compute cabinets and 32 communication cabinets linked by all-optical interconnects. The system's memory capacity reaches 1,152 TB, and Huawei claims a latency of 2.1 microseconds across the entire system.

Later in the production pipeline will be the Atlas 960 SuperPoD, set to incorporate 15,488 Ascend 960 chips in 220 cabinets covering 2,200 m². Xu said it will deliver "30 EFLOPS in FP8 and 60 EFLOPS in FP4, and include 4,460 TB of memory and 34 PB/s interconnect bandwidth."
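The quoted aggregate figures can be translated into rough per-chip numbers. The sketch below is a back-of-the-envelope calculation only, assuming decimal SI units (1 EFLOPS = 10^18 FLOPS, 1 PB/s = 10^15 B/s) and an even split across chips; Huawei has not published per-chip breakdowns for these systems.

```python
# Rough per-chip figures for the Atlas 950 and 960 SuperPoDs, derived from
# the aggregate numbers quoted in the article (chip counts, FP8 compute,
# interconnect bandwidth). Assumes decimal SI units and an even division
# of aggregate resources across chips.

SYSTEMS = {
    "Atlas 950": {"chips": 8_192,  "fp8_eflops": 8,  "bw_pb_s": 16},
    "Atlas 960": {"chips": 15_488, "fp8_eflops": 30, "bw_pb_s": 34},
}

def per_chip(spec):
    """Return (FP8 PFLOPS per chip, interconnect TB/s per chip)."""
    fp8_pflops = spec["fp8_eflops"] * 1e18 / spec["chips"] / 1e15
    bw_tb_s = spec["bw_pb_s"] * 1e15 / spec["chips"] / 1e12
    return round(fp8_pflops, 2), round(bw_tb_s, 2)

for name, spec in SYSTEMS.items():
    pflops, tb_s = per_chip(spec)
    print(f"{name}: ~{pflops} PFLOPS FP8 and ~{tb_s} TB/s per chip")
```

On these assumptions, each Ascend 950DT contributes roughly 1 PFLOPS of FP8 compute and just under 2 TB/s of interconnect bandwidth, with the Atlas 960 generation roughly doubling the per-chip compute figure.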

Beyond AI: General-purpose computing applications

The SuperPoD concept extends beyond AI workloads into general-purpose computing through the TaiShan 950 SuperPoD. Built on Kunpeng 950 processors, the system addresses enterprise challenges in replacing legacy mainframes and mid-range computers.


Xu positioned this as particularly relevant for the finance sector, where "the TaiShan 950 SuperPoD, combined with the distributed GaussDB, can serve as an ideal alternative, and replace, once and for all, mainframes, mid-range computers, and Oracle's Exadata database servers."

Open architecture strategy

Perhaps most significantly for the broader AI infrastructure market, Huawei announced the release of the UnifiedBus 2.0 technical specifications as open standards. The decision reflects both strategic positioning and practical constraints.

Xu acknowledged that "the Chinese mainland will lag behind in semiconductor manufacturing process nodes for a relatively long time" and emphasised that "sustainable computing power can only be achieved with process nodes that are practically available."

Yang framed the open approach as ecosystem building: "We are committed to our open-hardware and open-source-software approach that will help more partners develop their own industry-scenario-based SuperPoD solutions. This will accelerate developer innovation and foster a thriving ecosystem."

The company plans to open-source both hardware and software components, with hardware including NPU modules, air-cooled and liquid-cooled blade servers, AI cards, CPU boards, and cascade cards. On the software side, Huawei committed to fully open-sourcing the CANN compiler tools, the Mind series application kits, and the openPangu foundation models by 31 December 2025.

Market deployment and ecosystem impact

Real-world deployment provides validation for these technical claims. More than 300 Atlas 900 A3 SuperPoD units have already been shipped in 2025, deployed for more than 20 customers across sectors including internet, finance, services, electricity, and manufacturing.


The implications for the development of China's AI infrastructure are substantial. By creating an open ecosystem around domestic technology, Huawei is addressing the challenge of building competitive AI infrastructure within the parameters set by constrained semiconductor manufacturing and availability. Its approach enables broader industry participation in developing AI infrastructure solutions without requiring access to the most advanced process nodes.

For the global AI infrastructure market, Huawei's open architecture strategy introduces an alternative to the tightly integrated, proprietary hardware-and-software approach dominant among Western competitors. Whether the ecosystem Huawei proposes can achieve comparable performance and sustain commercial viability remains to be demonstrated at scale.

Ultimately, the SuperPoD architecture represents more than an incremental advance in AI computing. Huawei is proposing a fundamental rethink of how massive computational resources are connected, managed, and scaled. The open-source release of its specifications and components will test whether collaborative development can accelerate AI infrastructure innovation across an ecosystem of partners. That has the potential to reshape competitive dynamics in the global AI infrastructure market.
