SuperX has announced the launch of the SuperX XN9160-B200 AI Server, its latest flagship product. This next-generation AI server is designed to meet the rising demand for scalable, high-performance computing in AI training, machine learning (ML), and high-performance computing (HPC) workloads. It is powered by NVIDIA's Blackwell-architecture B200 GPU.
The XN9160-B200 AI Server accelerates large-scale distributed AI training and inference workloads. It is optimized for GPU-intensive tasks, supporting training and inference of foundation models with reinforcement learning (RL) and distillation techniques, multimodal model training and inference, and HPC applications such as climate modeling, drug discovery, seismic analysis, and insurance risk modeling. Its performance is comparable to that of a traditional supercomputer, delivering enterprise-grade capabilities in a compact package.
The SuperX XN9160-B200 AI server, which delivers powerful GPU instances and computational capability to accelerate global AI research, is a major milestone in SuperX's AI infrastructure strategy.
XN9160-B200 AI System
The brand-new XN9160-B200 unleashes extraordinary AI computing capability in a 10U chassis with its eight NVIDIA Blackwell B200 GPUs, fifth-generation NVLink technology, 1,440 GB of high-bandwidth memory (HBM3E), and 6th Gen Intel Xeon CPUs.
With eight NVIDIA Blackwell B200 GPUs and fifth-generation NVLink technology, the SuperX XN9160-B200's core engine delivers ultra-high inter-GPU bandwidth of up to 1.8 TB/s. This dramatically shortens the R&D cycle for activities like pre-training and fine-tuning trillion-parameter models, and speeds up large-scale AI model training by up to three times. With 1,440 GB of high-performance HBM3E memory running at FP8 precision, it delivers a throughput of 58 tokens per second per card on the GPT-MoE 1.8T model, a step change in inference performance: a 15x improvement over the previous-generation H100 platform's 3.5 tokens per second.
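As a quick sanity check, the headline figures above can be reproduced from the per-GPU numbers. This is an illustrative sketch only: the 180 GB per-GPU HBM3E capacity is an assumption based on publicly stated B200 figures, not a spec confirmed in this announcement.

```python
# Back-of-the-envelope check of the quoted system totals.
HBM3E_PER_GPU_GB = 180   # assumed per-B200 HBM3E capacity (not from this announcement)
GPU_COUNT = 8

total_hbm_gb = HBM3E_PER_GPU_GB * GPU_COUNT
print(total_hbm_gb)      # 1440 -- matches the quoted 1,440 GB system total

# Per-card inference throughput on GPT-MoE 1.8T, as quoted above.
b200_tokens_per_s = 58.0
h100_tokens_per_s = 3.5
speedup = b200_tokens_per_s / h100_tokens_per_s
print(round(speedup, 1)) # 16.6 -- raw ratio; the announcement rounds this to ~15x
```

The raw ratio of the two quoted throughput figures is about 16.6x, which the announcement states conservatively as a 15x improvement.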
6th Gen Intel® Xeon® CPUs, 5,600–8,000 MT/s DDR5 memory, and all-flash NVMe storage are the key components that power the system. These components speed up data pre-processing, ensure smooth operation in high-load virtualization scenarios, and improve the efficiency of sophisticated parallel computing, allowing AI model training and inference workloads to run stably and efficiently.
Powering AI Without Interruption
The XN9160-B200 uses an innovative multi-path power redundancy design to deliver outstanding operational reliability. With its 1+1 redundant 12V power supplies and 4+4 redundant 54V GPU power supplies, it significantly reduces the risk of single points of failure and ensures the system can run stably and continuously even in the face of unexpected events, supplying power for critical AI missions without interruption.
A built-in AST2600 intelligent management system on the SuperX XN9160-B200 enables easy remote monitoring and control. Along with other manufacturing quality-control procedures, every server undergoes more than 48 hours of full-load stress testing, hot and cold boot validation, and high/low-temperature aging screening to guarantee reliable delivery. In addition, SuperX, a Singapore-based company, offers a full-lifecycle service guarantee, a three-year warranty, and expert technical support to help businesses ride the AI wave.
