A major reduction in the PUE of a data center is quickly realized with liquid-cooled servers and infrastructure, which can reduce overall power consumption in the data center by up to 40%.
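To make the PUE claim concrete, here is a minimal sketch of the standard PUE calculation (total facility power divided by IT equipment power). All input figures are assumed for illustration only, not Supermicro measurements:

```python
# Hypothetical illustration of how cooling overhead drives PUE.
# All kW figures below are assumptions for the example.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness = total facility power / IT equipment power."""
    return total_facility_kw / it_load_kw

it_load = 1000.0                 # kW drawn by IT equipment (assumed)
air_cooling_overhead = 600.0     # kW for air cooling, fans, etc. (assumed)
liquid_cooling_overhead = 100.0  # kW with direct liquid cooling (assumed)

pue_air = pue(it_load + air_cooling_overhead, it_load)        # 1.6
pue_liquid = pue(it_load + liquid_cooling_overhead, it_load)  # 1.1

# Fractional reduction in total facility power
savings = 1 - (it_load + liquid_cooling_overhead) / (it_load + air_cooling_overhead)
print(f"PUE air: {pue_air:.2f}, PUE liquid: {pue_liquid:.2f}, saved: {savings:.0%}")
```

With these assumed overheads, the facility-level saving works out to roughly 31%; the "up to 40%" figure in the text would correspond to a higher starting cooling overhead.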
“Supermicro continues to work with our AI and HPC customers to bring the latest technology, including total liquid cooling solutions, into their data centers,” said Charles Liang, President and CEO of Supermicro. “Our complete liquid cooling solutions can handle up to 100 kW per rack, which reduces the TCO in data centers and allows for denser AI and HPC computing. Our building-block architecture allows us to bring the latest GPUs and accelerators to market, and with our trusted suppliers, we continue to bring new rack-scale solutions to market that ship to customers with a reduced time to delivery.”
Supermicro application-optimized high-performance servers are designed to accommodate the most performant CPUs and GPUs for simulation, data analytics, and machine learning. The Supermicro 4U 8-GPU liquid-cooled server is in a class by itself, delivering petaflops of AI computing power in a dense form factor with the NVIDIA H100/H200 HGX GPUs. Supermicro will soon ship the liquid-cooled Supermicro X14 SuperBlade in 8U and 6U configurations, the rackmount X14 Hyper, and the Supermicro X14 BigTwin. Several HPC-optimized server platforms will support the Intel Xeon 6900 with P-cores in a compact, multi-node form factor.
In addition, Supermicro continues its leadership in shipping the broadest portfolio of liquid-cooled MGX products in the industry. Supermicro also confirms its support for delivering the latest accelerators from Intel with its new Intel® Gaudi® 3 accelerator and AMD’s MI300X accelerators. With up to 120 nodes per rack with the Supermicro SuperBlade®, large-scale HPC applications can be executed in just a few racks. Supermicro will demonstrate a range of servers at the International Supercomputing Conference, including Supermicro X14 systems incorporating the Intel® Xeon® 6 processors.
Supermicro will also showcase and demonstrate a range of solutions designed specifically for HPC and AI environments at ISC 2024. The new 4U 8-GPU liquid-cooled servers with NVIDIA HGX H100 and H200 GPUs highlight the Supermicro lineup. These servers and others will support the NVIDIA B200 HGX GPUs when available. New systems with high-end GPUs accelerate AI training and HPC simulation by bringing more data closer to the GPU than previous generations through high-speed HBM3 memory. With the incredible density of the 4U liquid-cooled servers, a single rack delivers 8 servers × 8 GPUs × 1,979 TFLOPS FP16 (with sparsity) = 126+ petaflops. The Supermicro SYS-421GE-TNHR2-LCC can use dual 4th or 5th Gen Intel Xeon processors, and the AS-4125GS-TNHR2-LCC is available with dual 4th Gen AMD EPYC™ CPUs.
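The rack-level throughput quoted above is straightforward arithmetic; a short sketch of the calculation, using only the figures given in the text:

```python
# Back-of-the-envelope rack throughput from the figures quoted above.
servers_per_rack = 8
gpus_per_server = 8
fp16_tflops_per_gpu = 1979  # FP16 with sparsity, per the text

rack_tflops = servers_per_rack * gpus_per_server * fp16_tflops_per_gpu
rack_petaflops = rack_tflops / 1000  # 1 petaflop = 1,000 teraflops

print(f"{rack_tflops} TFLOPS = {rack_petaflops:.1f} petaflops per rack")
```

This reproduces the "126+ petaflops" headline figure (126,656 TFLOPS, or about 126.7 petaflops of dense-rack FP16 throughput with sparsity).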
The new AS-8125GS-TNMR2 server gives users access to eight AMD Instinct™ MI300X accelerators. This system also includes dual AMD EPYC™ 9004 Series processors with up to 128 cores/256 threads and up to 6TB of memory. Each AMD Instinct MI300X accelerator contains 192GB of HBM3 memory per GPU, all connected with an AMD Universal Base Board (UBB 2.0). Moreover, the new AS-2145GH-TNMR-LCC and AS-4145GH-TNMR APU servers are targeted at accelerating HPC workloads with the MI300A APU. Each APU combines high-performance AMD CPU, GPU, and HBM3 memory, for a total of 912 AMD CDNA™ 3 GPU compute units, 96 “Zen 4” cores, and 512GB of unified HBM3 memory in a single system.
At ISC 2024, a Supermicro 8U server with the Intel Gaudi 3 AI accelerator will be shown. This new system is designed for AI training and inferencing and can be directly networked with a standard Ethernet fabric. Twenty-four 200-gigabit (Gb) Ethernet ports are integrated into every Intel Gaudi 3 accelerator, providing flexible and open-standard networking. In addition, 128GB of HBM2e high-speed memory is included. The Intel Gaudi 3 accelerator is designed to scale up and scale out efficiently from a single node to thousands, to meet the expansive requirements of GenAI models. Supermicro’s Petascale storage systems, which are critical for large-scale HPC and AI workloads, will also be displayed.