Supermicro has announced that its end-to-end AI data center Building Block Solutions, accelerated by the NVIDIA Blackwell platform, are now in full production. The Supermicro Building Block portfolio provides the infrastructure components required to scale Blackwell solutions with rapid time to deployment.
The portfolio includes a wide variety of liquid-cooled and air-cooled systems with multiple CPU options, spanning liquid-to-liquid (L2L), liquid-to-air (L2A), and enhanced thermal designs that support conventional air cooling. In addition, a turn-key offering with worldwide delivery, professional support, and service is available, covering a complete data center management software suite, rack-level integration, full network switching and cabling, and cluster-level L12 solution validation.
"At this transformative moment in AI, where scaling laws are pushing the limits of data center capabilities, our latest NVIDIA Blackwell-powered solutions, developed through close collaboration with NVIDIA, deliver outstanding computational power," said Charles Liang, president and CEO of Supermicro. "Customers can now build an architecture that supports more complex AI workloads with remarkable efficiency, thanks to Supermicro's NVIDIA Blackwell GPU offerings in plug-and-play scalable systems with advanced liquid cooling and air cooling. This reaffirms our commitment to delivering innovative, future-proof solutions that accelerate AI development."
Supermicro's NVIDIA HGX B200 8-GPU systems employ advanced liquid and air cooling technologies. In the same 4U form factor, the new 250kW coolant distribution unit (CDU) and newly designed cold plates more than double the cooling capacity of the previous generation. With redesigned vertical coolant distribution manifolds (CDMs), the rack-scale design no longer consumes valuable rack units and is available in 42U, 48U, or 52U configurations. This enables eight systems with 64 NVIDIA Blackwell GPUs in a 42U rack, or twelve systems with 96 NVIDIA Blackwell GPUs in a 52U rack.
The new air-cooled 10U NVIDIA HGX B200 system features a redesigned chassis with expanded thermal headroom, accommodating eight 1000W TDP Blackwell GPUs. Up to four of the new 10U air-cooled systems, which deliver up to 15x inference and 3x training performance, can be fully integrated into a rack, matching the density of the previous generation.
To enable a non-blocking, 256-GPU scalable unit in five racks, or an extended 768-GPU scalable unit in nine racks, the new SuperCluster designs integrate NVIDIA Quantum-2 InfiniBand or NVIDIA Spectrum-X Ethernet networking in a centralized rack. Purpose-built for NVIDIA HGX B200 systems, with native support for the NVIDIA AI Enterprise software platform for developing and deploying production-grade, end-to-end agentic AI pipelines, this architecture, combined with Supermicro's experience deploying the world's largest liquid-cooled data centers, offers remarkable efficiency and time-to-online for today's most ambitious AI data center projects.
Liquid-cooled and air-cooled Supermicro NVIDIA HGX B200 systems
The newly developed cold plates and advanced tubing design of the liquid-cooled 4U NVIDIA HGX B200 8-GPU system further improve on the efficiency and serviceability of its predecessor, used for the NVIDIA HGX H100/H200 8-GPU system. The new rack-scale design with vertical coolant distribution manifolds (CDMs) allows denser layouts with flexible configurations for different data center environments. This is complemented by a new 250kW coolant distribution unit, which more than doubles the cooling capacity of the previous generation while keeping the same 4U form factor. For liquid-cooled data centers, Supermicro offers 42U, 48U, or 52U rack configurations. The 42U and 48U configurations accommodate eight systems (64 GPUs) per rack and 256-GPU scalable units in five racks. The most sophisticated AI data center deployments can use the 52U rack layout, which supports 96 GPUs per rack and 768-GPU scalable units in nine racks. Supermicro also offers a liquid-to-air cooling rack solution that requires no facility water, as well as an in-row CDU option for large installations.
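The rack and scalable-unit figures above follow from simple multiplication. A minimal sketch, assuming 8 GPUs per HGX B200 system and one centralized networking rack per scalable unit (an assumption inferred from the 5-rack/256-GPU and 9-rack/768-GPU figures, not stated explicitly):

```python
# GPU counts for the liquid-cooled HGX B200 rack configurations described above.
GPUS_PER_SYSTEM = 8  # each HGX B200 system carries 8 Blackwell GPUs

configs = {
    "42U/48U": {"systems_per_rack": 8, "compute_racks_per_unit": 4},
    "52U": {"systems_per_rack": 12, "compute_racks_per_unit": 8},
}

for name, c in configs.items():
    gpus_per_rack = c["systems_per_rack"] * GPUS_PER_SYSTEM
    unit_gpus = gpus_per_rack * c["compute_racks_per_unit"]
    # Assumed: one extra rack per unit for centralized networking.
    total_racks = c["compute_racks_per_unit"] + 1
    print(f"{name}: {gpus_per_rack} GPUs/rack, "
          f"{unit_gpus}-GPU scalable unit in {total_racks} racks")
```

Run as written, this reproduces the 64-GPU/256-GPU and 96-GPU/768-GPU figures quoted in the article.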
To speed time to production AI, Supermicro's NVIDIA HGX B200 systems natively support NVIDIA AI Enterprise software. NVIDIA NIM microservices give enterprises access to the latest AI models for reliable, secure, and rapid deployment on NVIDIA-accelerated infrastructure anywhere, including workstations, data centers, and the cloud.
The new 10U air-cooled NVIDIA HGX B200 8-GPU system, featuring a redesigned modular GPU tray that houses the NVIDIA Blackwell GPUs in an air-cooled environment, is also available for traditional data centers. The air-cooled rack design delivers NVIDIA Blackwell performance while following the proven, industry-leading architecture of the previous generation, with four systems and 32 GPUs in a 48U rack. For scalability across a high-performance compute fabric, all Supermicro NVIDIA HGX B200 systems ship with a 1:1 GPU-to-NIC ratio supporting NVIDIA ConnectX-7 NICs or NVIDIA BlueField-3 SuperNICs.
Supermicro's systems are part of the NVIDIA-Certified Systems program. Through this program, top NVIDIA partners integrate NVIDIA GPUs, CPUs, and fast, secure networking technologies into systems, ensuring configurations validated for optimal performance, reliability, and scalability. By selecting an NVIDIA-Certified System, enterprises can confidently choose hardware solutions to support their accelerated computing workloads. Supermicro systems with NVIDIA H100 and H200 GPUs are certified by NVIDIA.
End-to-end liquid cooling solution for the NVIDIA GB200 NVL72
Supermicro's SuperCluster solution, which combines Supermicro's end-to-end liquid-cooling expertise with the NVIDIA GB200 NVL72 system, is a breakthrough in AI computing infrastructure. By combining 72 NVIDIA Blackwell GPUs and 36 NVIDIA Grace CPUs in a single rack, the system achieves 130 TB/s of aggregate GPU communication bandwidth and delivers exascale computing capability via NVIDIA's largest NVLink domain to date.
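The 130 TB/s figure is consistent with fifth-generation NVLink's published per-GPU bandwidth of 1.8 TB/s multiplied across the 72-GPU NVLink domain; a quick check (the 1.8 TB/s per-GPU number is an NVIDIA Blackwell spec assumed here, not stated in this article):

```python
# Aggregate NVLink bandwidth of a 72-GPU GB200 NVL72 rack.
# Assumes 1.8 TB/s of total NVLink bandwidth per Blackwell GPU (NVLink 5).
gpus = 72
nvlink_tb_per_gpu = 1.8  # TB/s per GPU

aggregate = gpus * nvlink_tb_per_gpu
print(f"{aggregate:.1f} TB/s")  # 129.6 TB/s, rounded to the quoted 130 TB/s
```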
Thanks to its adaptability, the 48U solution can be deployed in a variety of data center environments and supports both liquid-to-air and liquid-to-liquid cooling configurations. Supermicro offers a complete solution from proof of concept to full-scale deployment with its SuperCloud Composer software, which also provides tools for monitoring and managing liquid-cooled infrastructure.
Complete data center solutions and NVIDIA Blackwell deployment services
Supermicro is a comprehensive one-stop solution provider with global manufacturing scale, offering liquid-cooling technologies, networking solutions, cabling, management software, testing and validation, onsite deployment services, and all necessary components from proof-of-concept (PoC) to full-scale deployment. Its in-house liquid-cooling ecosystem provides a complete, custom-designed thermal management solution, including cooling towers, manifolds, hoses, connectors, coolant distribution units in various form factors and capacities, and advanced monitoring and management software, along with cold plates optimized for GPUs, CPUs, and memory modules. With manufacturing facilities in San Jose, Europe, and Asia, Supermicro provides unmatched manufacturing capacity for liquid-cooled rack systems, ensuring consistent quality, rapid delivery, and lower total cost of ownership (TCO) and environmental impact.