This article originally appeared in AI Business.
Google, AMD, Meta, and Microsoft, together with other technology vendors, have launched a new industry standard for AI connectivity in data centers.
The Ultra Accelerator Link (UALink) is designed to improve performance and deployment flexibility in AI computing clusters housed in data centers.
UALink applies to accelerators such as GPUs, enabling the hardware powering AI training and inference workloads to interconnect with one another more efficiently.
Version 1.0 of the standard will enable data center operators to connect up to 1,024 accelerators in a single computing pod. It is set to be formally adopted later this year.
Broadcom, Cisco, Intel, and HPE also signed on to form the open industry standard.
The companies said the UALink standard will enable data centers to add computing resources to a single instance, allowing them to scale capacity on demand without disrupting ongoing workloads.
“Ultra-high performance interconnects are becoming increasingly important as AI workloads continue to grow in size and scope,” said Martin Lund, executive vice president of Cisco’s Common Hardware Group. “Together, we are committed to developing UALink, which will be a scalable and open solution available to help overcome some of the challenges with building AI supercomputers.”
Forrest Norrod, general manager of AMD’s data center solutions business group, said the work being done by the companies in UALink to create an open, high-performance, and scalable accelerator fabric is essential for the future of AI.
“Together, we bring extensive experience in creating large-scale AI and high-performance computing solutions that are based on open standards, efficiency, and robust ecosystem support,” Norrod said.
The companies that adopted the UALink standard are members of the Ultra Ethernet Consortium (UEC), an industry group supporting cooperation around Ethernet-based networking.
“In a very short period of time, the technology industry has embraced challenges that AI and high-performance computing have uncovered,” said J Metz, UEC’s chair. “Interconnecting accelerators like GPUs requires a holistic perspective when seeking to improve efficiencies and performance. At UEC, we believe that UALink’s scale-up approach to solving pod cluster issues complements our own scale-out protocol, and we look forward to collaborating on creating an open, ecosystem-friendly, industry-wide solution that addresses both kinds of needs in the future.”
A notable absence among the companies that pledged themselves to the standard is Nvidia, which uses its own NVLink to interconnect GPUs.