Super Micro Computer, Inc. (SMCI), a renowned provider of comprehensive IT solutions, has announced a significant expansion of its NVIDIA Blackwell architecture portfolio with the introduction and shipment of new 4U and 2-OU (OCP) liquid-cooled NVIDIA HGX B300 systems. As a core part of Supermicro's Data Center Building Block Solutions (DCBBS), these state-of-the-art additions set new standards for GPU density and power efficiency, tailored for hyperscale data centres and AI factories.
President and CEO Charles Liang highlights that the latest systems deliver the density and power efficiency required in today's fast-moving AI infrastructure landscape. By offering the market's most compact NVIDIA HGX B300 solutions, Supermicro achieves a remarkable 144 GPUs in a single rack, courtesy of its direct liquid-cooling technology, notably reducing power consumption and cooling costs.
The 2-OU (OCP) system conforms to the 21-inch OCP Open Rack V3 (ORV3) specification, supporting up to 144 GPUs per rack. This represents maximum GPU density, particularly important for hyperscale and cloud providers that prioritise space efficiency without compromising serviceability. The design includes efficient cooling loops, blind-mate manifold connections, and a modular GPU/CPU tray layout. Each node accelerates AI workloads with eight NVIDIA Blackwell Ultra GPUs, saving substantial space and energy. A single ORV3 rack accommodates up to 18 nodes for a total of 144 GPUs, scaling smoothly with NVIDIA Quantum-X800 InfiniBand switches supported by Supermicro's cooling units.
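The rack-level arithmetic above (18 nodes of eight GPUs each) can be sketched as a quick check; the figures come from the article, while the function name and structure are illustrative only:

```python
# Rack-density check for the 2-OU (OCP) ORV3 configuration described above.
# Node and GPU counts (18 nodes/rack, 8 GPUs/node) are from the article;
# the helper itself is an illustrative sketch, not vendor tooling.

def gpus_per_rack(nodes_per_rack: int, gpus_per_node: int) -> int:
    """Total GPUs in a fully populated rack."""
    return nodes_per_rack * gpus_per_node

total = gpus_per_rack(nodes_per_rack=18, gpus_per_node=8)
print(total)  # → 144
```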
The 4U variant complements this offering, delivering the same compute capability in a conventional 19-inch EIA rack suited to large-scale AI deployments. Thanks to Supermicro's DLC-2 technology, it captures up to 98% of the heat generated, improving power efficiency with reduced noise and better serviceability for dense AI clusters.
Key performance improvements deliver significant gains, including 2.1TB of HBM3e GPU memory for handling larger models. Both platforms greatly increase compute-fabric throughput, up to 800Gb/s, using integrated NVIDIA ConnectX-8 SuperNICs when paired with NVIDIA networking. These enhancements accelerate AI workloads such as agentic applications, foundation-model training, and large-scale inference.
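As a rough illustration of what the 2.1TB figure means per accelerator, assuming (as the article implies) that the pool is shared evenly across the eight GPUs in a node, the per-GPU share works out as follows; the calculation is a sketch under that assumption, not a vendor specification:

```python
# Back-of-the-envelope per-GPU memory share for one HGX B300 node.
# 2.1TB total HBM3e and 8 GPUs per node are from the article; even
# division across GPUs is an assumption for illustration only.

TOTAL_HBM3E_GB = 2100  # 2.1TB expressed in GB
GPUS_PER_NODE = 8

per_gpu_gb = TOTAL_HBM3E_GB / GPUS_PER_NODE
print(per_gpu_gb)  # → 262.5
```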
Supermicro's focus on total cost of ownership, efficiency, and serviceability shines through. Its DLC-2 technology allows data centres to cut power usage by up to 40%, reduce water consumption through 45°C warm-water operation, and eliminate the need for chilled water and compressors. Pre-validated, these systems streamline deployment for hyperscale, enterprise, and government customers.
The introduction extends Supermicro's NVIDIA Blackwell portfolio, which also includes the NVIDIA GB300 NVL72, NVIDIA HGX B200, and other platforms. Each is certified for optimal AI application performance, offering secure scalability from single nodes to entire AI infrastructures.
