Cloud Computing

Huawei CloudMatrix AI performance beat Nvidia in internal tests

Last updated: June 20, 2025 5:05 pm
Published June 20, 2025

Huawei CloudMatrix has achieved what the company claims is a major milestone, with internal testing showing its new data centre architecture outperforming Nvidia's H800 graphics processing units in running DeepSeek's advanced R1 artificial intelligence model, according to a comprehensive technical paper released this week by Huawei researchers.

The research, conducted by Huawei Technologies in collaboration with Chinese AI infrastructure startup SiliconFlow, provides what appears to be the first detailed public disclosure of performance metrics for CloudMatrix384.

However, it is important to note that the benchmarks were conducted by Huawei on its own systems, raising questions about independent verification of the claimed performance advantages over established industry standards.

The paper describes CloudMatrix384 as a "next-generation AI datacentre architecture that embodies Huawei's vision for reshaping the foundation of AI infrastructure." While the technical achievements outlined appear impressive, the lack of third-party validation means the results should be viewed in the context of Huawei's continuing efforts to demonstrate technological competitiveness under US sanctions.

The CloudMatrix384 structure

CloudMatrix384 integrates 384 Ascend 910C NPUs and 192 Kunpeng CPUs in a supernode, connected by an ultra-high-bandwidth, low-latency Unified Bus (UB).

Unlike traditional hierarchical designs, its peer-to-peer architecture enables what Huawei calls "direct all-to-all communication," allowing compute, memory, and network resources to be pooled dynamically and scaled independently.

The system's design addresses notable challenges in building modern AI infrastructure, particularly for mixture-of-experts (MoE) architectures and distributed key-value cache access, both considered essential for large language model operations.

Efficiency claims: The numbers in context

The Huawei CloudMatrix AI performance results, while produced internally, present impressive metrics on the system's capabilities. To understand the numbers, it helps to think of AI processing like a conversation: the "prefill" phase is when the AI reads and 'understands' a question, while the "decode" phase is when it generates its response, word by word.


According to the company's testing, CloudMatrix-Infer achieves a prefill throughput of 6,688 tokens per second per processing unit, and 1,943 tokens per second when generating a response.

Think of tokens as individual units of text, roughly equivalent to words or parts of words that the AI processes. For context, this means the system can handle thousands of words per second on each chip.

The TPOT measurement (time per output token) of under 50 milliseconds means the system generates each word of its response in less than a twentieth of a second, producing remarkably fast response times.
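The relationship between TPOT and per-request generation speed is simple arithmetic, sketched below; the figures come from the article, while the helper function name is our own illustration.

```python
def tokens_per_second(tpot_ms: float) -> float:
    """Per-request generation rate implied by a given TPOT
    (time per output token), in tokens per second."""
    return 1000.0 / tpot_ms

# A 50 ms TPOT caps each request at 1000 / 50 = 20 tokens per second.
print(tokens_per_second(50))
```

Note that this is a per-request ceiling; the 1,943 tokens-per-second decode figure is aggregate throughput across many concurrent requests on one processing unit.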

More significantly, Huawei's results correspond to what it claims are superior efficiency scores compared with competing systems. The company measures this through "compute efficiency": essentially, how much useful work each chip accomplishes relative to its theoretical maximum processing power.

Huawei claims its system achieves 4.45 tokens per second per TFLOPS for reading questions and 1.29 tokens per second per TFLOPS for generating answers. For perspective, TFLOPS (trillion floating-point operations per second) measures raw computational power, akin to the horsepower rating of a car.

Huawei's efficiency claims suggest its system does more useful AI work per unit of computational horsepower than Nvidia's competing H100 and H800 processors.
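The compute-efficiency metric is just measured throughput divided by the chip's peak compute. A minimal sketch, where the peak-TFLOPS value is a hypothetical placeholder rather than a figure from the paper:

```python
def tokens_per_tflops(throughput_tok_s: float, peak_tflops: float) -> float:
    """Compute efficiency: useful tokens processed per second,
    per TFLOPS of the chip's theoretical peak throughput."""
    return throughput_tok_s / peak_tflops

# Illustrative only: 1500 TFLOPS is an assumed peak rating, not a
# specification from the article or the paper.
print(tokens_per_tflops(6688, 1500))
```

The metric normalises for raw hardware power, which is why Huawei uses it to argue its chips do more useful work per unit of compute despite being individually less powerful than Nvidia's.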

The company reports sustaining 538 tokens per second under the stricter timing requirement of sub-15 milliseconds per token.

However, these impressive numbers lack independent verification from third parties, the standard practice for validating performance claims in the technology industry.

Technical improvements behind the claims

The reported Huawei CloudMatrix AI performance metrics stem from several technical details described in the research paper. The system implements what Huawei describes as a "peer-to-peer serving architecture" that disaggregates the inference workflow into three subsystems: prefill, decode, and caching, enabling each component to scale based on workload demands.


The paper describes three innovations: a peer-to-peer serving architecture with disaggregated resource pools; large-scale expert parallelism supporting up to an EP320 configuration, in which each NPU die hosts one expert; and hardware-aware optimisations including optimised operators, microbatch-based pipelining, and INT8 quantisation.
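To illustrate the third optimisation: INT8 quantisation stores values as 8-bit integers plus a scale factor, trading a small amount of precision for large memory and bandwidth savings. The sketch below shows the simplest symmetric per-tensor scheme; it is a generic illustration of the technique, not Huawei's implementation.

```python
def quantize_int8(values):
    """Symmetric per-tensor INT8 quantisation: map floats into
    [-127, 127] using a single shared scale factor."""
    scale = max(abs(v) for v in values) / 127.0
    return [round(v / scale) for v in values], scale

def dequantize_int8(quantized, scale):
    """Recover approximate floats from INT8 values and the scale."""
    return [q * scale for q in quantized]

weights = [0.82, -1.27, 0.05, 0.4]
q, s = quantize_int8(weights)
approx = dequantize_int8(q, s)
# Each recovered value differs from the original by at most scale / 2.
```

Production systems typically use per-channel scales and calibration data to control the accuracy loss, which is what Huawei's benchmark-accuracy claims later in the article speak to.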

Geopolitical context and strategic implications

The performance claims emerge against the backdrop of intensifying US-China tech tensions. Huawei founder Ren Zhengfei acknowledged recently that the company's chips still lag behind US competitors "by a generation," but said clustering methods can achieve performance comparable to the world's most advanced systems.

Nvidia CEO Jensen Huang appeared to validate this during a recent CNBC interview, stating: "AI is a parallel problem, so if each one of the computers is not capable… just add more computers… in China, [where] they have plenty of energy, they'll just use more chips."

Lead researcher Zuo Pengfei, part of Huawei's "Genius Youth" programme, framed the research's strategic significance, writing that the paper aims "to build confidence in the domestic technology ecosystem in using Chinese-developed NPUs to outperform Nvidia's GPUs."

Questions of verification and trade influence

Beyond the performance metrics, Huawei reports that INT8 quantisation maintains model accuracy comparable to the official DeepSeek-R1 API across 16 benchmarks, in internal, unverified tests.

The AI and technology industries will likely await independent verification of Huawei's CloudMatrix AI performance before drawing definitive conclusions.

Nevertheless, the technical approaches described suggest genuine innovation in AI infrastructure design, offering insights for the industry regardless of the specific performance numbers.


Huawei's claims, whether validated or not, highlight the intensity of competition in AI hardware and the varied approaches companies take to achieve computational efficiency.

(Photo by Shutterstock)

See also: From cloud to collaboration: Huawei maps out AI future in APAC


Want to learn more about cybersecurity and the cloud from industry leaders? Check out Cyber Security & Cloud Expo, taking place in Amsterdam, California, and London.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.
