AMD recently held its first financial analyst day, where CEO Lisa Su said the vendor now sees a total addressable AI market that could exceed $1 trillion by 2030, doubling last year's stated target of $500 billion by 2028.
Su told analysts this week that the company is seeing insatiable AI demand, and added that revenue growth could climb to 35% per year over the next three to five years as a result.
In addition, Su said she expects data center revenue to increase 60% over the next three to five years, up from $16 billion in 2025, and she sees the total addressable market for AI data centers growing to $1 trillion over the next five years. That figure includes all silicon, from GPUs and CPUs to networking gear.
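Taken at face value, those growth rates compound quickly. A minimal sketch of the arithmetic, assuming the 60% figure is an annual rate applied to the $16 billion 2025 base (the article does not spell out how the rate is applied):

```python
# Hypothetical compound-growth projection. The 60% rate and $16B base
# come from the article; treating 60% as a yearly rate over five years
# is an assumption made purely for illustration.
def project(base_billions: float, annual_rate: float, years: int) -> float:
    """Return revenue in billions after compounding annual growth."""
    return base_billions * (1 + annual_rate) ** years

revenue_2030 = project(16.0, 0.60, 5)
print(f"Projected 2030 data center revenue: ${revenue_2030:.0f}B")
```

Even the more modest 35% company-wide figure, compounded the same way, would more than quadruple revenue over five years.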
AMD has scored some big data center customers recently, including a 6-gigawatt deal with OpenAI and a plan to supply Oracle with 50,000 chips. It has also secured agreements to build two more high-performance supercomputers at Oak Ridge National Laboratory.
Focusing on the data center, Dan McNamara, senior vice president of the server CPU business, said that three years ago AMD set out a “bold vision” for its EPYC server CPUs, aiming to disrupt the market with advanced architecture, packaging, and process technologies. Since then, AMD has launched two new generations of EPYC products, expanded its customer and partner ecosystem, and maintained a focus on execution.
Those efforts have paid off: AMD now claims the top spot in server CPU market share, hovering around 40%. He didn't mention that it helps when your chief competitor is self-destructing. McNamara said that while AMD has enjoyed success in HPC, much of that translates to the enterprise.
“There are very beefy workloads that you need to have that performance for to run the enterprise,” he said. “The Fortune 500 mainstream enterprise customers are now … adopting Epyc faster than anybody. We've seen a 3x adoption this year. And what that does is drives back to the on-prem enterprise adoption, so that the hybrid multi-cloud is end-to-end on Epyc.”
One of the key focus areas for AMD's Epyc strategy has been its ecosystem build-out. It has almost 180 platforms, from racks to blades to towers to edge devices, and 3,000 solutions available on top of those platforms.
One of the areas where AMD pushes into the enterprise is what it calls industry or vertical workloads. “These are the workloads that drive the end business. So in semiconductors, that's telco, it's the network, and the goal there is to accelerate those workloads and either drive more throughput or drive faster time to market or faster time to results. And we almost double our competition in terms of faster time to results,” said McNamara.
And it's paying off. McNamara noted that over 60% of the Fortune 100 are using AMD, and that number is growing quarterly. “We track that very, very closely,” he said. The other question is whether AMD is landing new customer acquisitions, customers deploying Epyc for the first time. “We've doubled that year on year.”
AMD didn't just brag; it laid out a road map for the next two years, and 2026 is going to be a very busy year. That will be the year that new CPUs, both client and server, built on the Zen 6 architecture begin to appear. On the server side, that means the Venice generation of Epyc server processors.
Zen 6 processors will be built on a 2-nanometer process from (you guessed it) TSMC. Zen 6 CPUs are expected to be socket-compatible with existing AM5 motherboards, ensuring backward compatibility for desktop users, though it's unclear whether the server parts will be backward compatible as well.
The chips are expected to use advanced packaging technologies, such as fan-out interconnects and a new Infinity Fabric interconnect strategy, which could improve throughput. Zen 6 will also bring improved instructions per cycle (IPC) for higher performance across both desktop and server platforms.
The architecture will feature expanded AI capabilities, building on the AI features introduced in earlier generations, though details of that plan are still sketchy.
AMD also took the time to detail its Instinct GPU accelerator plans. It announced the Instinct MI400 series based on the CDNA 5 architecture, the compute-focused counterpart to the RDNA technology used in its Radeon GPU cards. Scheduled for launch in 2026, the MI400 aims to double the compute performance of the MI350, offering 20 PFLOPS of FP8 compute and 432GB of HBM4 memory, up from 288GB of HBM3E in the previous generation. Bandwidth jumps from 8 TB/s in the MI350 generation to 19.6 TB/s.
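The generational uplift is easy to sanity-check from those figures. A quick sketch comparing the stated MI350 and MI400 numbers (memory sizes and bandwidths are the ones quoted above; the ratios are just derived arithmetic):

```python
# Generation-over-generation ratios computed from the figures cited
# in the article for the MI350 and MI400 series.
mi350 = {"memory_gb": 288, "bandwidth_tb_s": 8.0}
mi400 = {"memory_gb": 432, "bandwidth_tb_s": 19.6}

memory_uplift = mi400["memory_gb"] / mi350["memory_gb"]          # 1.5x capacity
bandwidth_uplift = mi400["bandwidth_tb_s"] / mi350["bandwidth_tb_s"]  # ~2.45x

print(f"Memory: {memory_uplift:.2f}x, Bandwidth: {bandwidth_uplift:.2f}x")
```

Notably, the bandwidth jump (roughly 2.45x) outpaces the capacity jump (1.5x), consistent with the move from HBM3E to HBM4.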
There will be two variants of the MI400 series at launch. The Instinct MI455X is designed for large-scale AI training and cloud deployment, while the MI430X is aimed at high-performance computing and government-focused AI initiatives. The MI430X integrates native FP64 processing units and hybrid CPU+GPU support.
And if that's not enough, AMD announced that the Instinct MI500 series is already in advanced design and is expected to launch in 2027. Nothing is known about the new design yet.
