Ant Group is counting on Chinese-made semiconductors to train artificial intelligence models, aiming to reduce costs and cut its dependence on restricted US technology, according to people familiar with the matter.
The Alibaba-affiliated company has used chips from domestic suppliers, including those tied to its parent Alibaba and to Huawei Technologies, to train large language models using the Mixture of Experts (MoE) method. The results were reportedly comparable to those produced with Nvidia's H800 chips, sources claim. While Ant continues to use Nvidia chips for some of its AI development, one source said the company is turning increasingly to alternatives from AMD and Chinese chipmakers for its latest models.
The development signals Ant's deeper involvement in the escalating AI race between Chinese and US tech firms, particularly as companies look for cost-effective ways to train models. The experimentation with domestic hardware reflects a broader effort among Chinese firms to work around export restrictions that block access to high-end chips like Nvidia's H800, which, although not the most advanced, is still one of the more powerful GPUs available to Chinese organisations.
Ant has published a research paper describing its work, stating that its models, in some tests, performed better than those developed by Meta. Bloomberg News, which initially reported the matter, has not independently verified the company's results. If the models perform as claimed, Ant's efforts may represent a step forward in China's attempt to lower the cost of running AI applications and reduce reliance on foreign hardware.
MoE models divide tasks into smaller data sets handled by separate components, and have gained attention among AI researchers and data scientists. The technique has been used by Google and the Hangzhou-based startup DeepSeek. The MoE concept is similar to having a team of specialists, each handling part of a task, which makes the process of producing models more efficient. Ant has declined to comment on its work with respect to its hardware sources.
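To illustrate the idea at a very high level, here is a minimal toy sketch of MoE-style routing. It is not Ant's implementation; the experts and gating rule are invented for illustration. The point is that a gate selects which expert handles each input, so only a fraction of the model's parameters are active per input:

```python
# Toy illustration of Mixture-of-Experts routing (a hypothetical sketch,
# not Ant's actual architecture). A "gate" decides which expert handles
# a given input, so only one expert runs per input (sparse activation).

def expert_double(x):
    """Toy expert 1: handles small inputs."""
    return 2 * x

def expert_square(x):
    """Toy expert 2: handles large inputs."""
    return x * x

EXPERTS = [expert_double, expert_square]

def gate(x):
    """Toy gating rule: route inputs below 10 to expert 0, others to expert 1."""
    return 0 if x < 10 else 1

def moe_forward(x):
    # Only the selected expert is evaluated, which is the key
    # efficiency property that keeps compute cost low as the
    # number of experts grows.
    return EXPERTS[gate(x)](x)
```

In a real MoE language model the gate is itself a learned network and the experts are large neural sub-networks, but the routing principle is the same.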
Training MoE models depends on high-performance GPUs, which can be too expensive for smaller companies to acquire or use. Ant's research focused on lowering that cost barrier. The paper's title is suffixed with a clear objective: scaling models "without premium GPUs." [our quotation marks]
The path taken by Ant, and the use of MoE to reduce training costs, contrasts with Nvidia's approach. CEO Jensen Huang has said that demand for computing power will continue to grow, even with the introduction of more efficient models like DeepSeek's R1. His view is that companies will seek more powerful chips to drive revenue growth, rather than aiming to cut costs with cheaper alternatives. Nvidia's strategy remains focused on building GPUs with more cores, transistors, and memory.
According to the Ant Group paper, training on one trillion tokens, the basic units of data that AI models use to learn, costs about 6.35 million yuan (roughly $880,000) using conventional high-performance hardware. The company's optimised training method reduced that cost to around 5.1 million yuan by using lower-specification chips.
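Using the figures quoted from the paper, the claimed saving works out to roughly a fifth of the conventional cost per trillion tokens:

```python
# Cost comparison per one trillion training tokens, using the yuan
# figures quoted from Ant's paper.
conventional = 6_350_000   # conventional high-performance GPU hardware
optimised = 5_100_000      # optimised method on lower-spec chips

saving = conventional - optimised
saving_pct = saving / conventional * 100
print(f"Saving: {saving:,} yuan ({saving_pct:.1f}%)")
# prints: Saving: 1,250,000 yuan (19.7%)
```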
Ant said it plans to apply the models produced in this way, Ling-Plus and Ling-Lite, to industrial AI use cases such as healthcare and finance. Earlier this year, the company acquired Haodf.com, a Chinese online medical platform, furthering Ant's ambition to deploy AI-based solutions in healthcare. It also operates other AI services, including a virtual assistant app called Zhixiaobao and a financial advisory platform known as Maxiaocai.
"If you find one point of attack to beat the world's best kung fu master, you can still say you beat them, which is why real-world application is important," said Robin Yu, chief technology officer of Beijing-based AI firm Shengshang Tech.
Ant has made its models open source. Ling-Lite has 16.8 billion parameters, the settings that help determine how a model functions, while Ling-Plus has 290 billion. For comparison, estimates suggest the closed-source GPT-4.5 has around 1.8 trillion parameters, according to MIT Technology Review.
Despite the progress, Ant's paper noted that training models remains challenging. Small adjustments to hardware or model structure during training sometimes resulted in unstable performance, including spikes in error rates.
(Photo by Unsplash)
See also: DeepSeek V3-0324 tops non-reasoning AI models in open-source first

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo, taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
