Cloud services provider CoreWeave has announced it is offering Nvidia’s GB200 NVL72 systems, otherwise known as “Grace Blackwell,” to customers looking to do large-scale AI training.
CoreWeave said its portfolio of cloud services is optimized for the GB200 NVL72, including CoreWeave’s Kubernetes Service, Slurm on Kubernetes (SUNK), Mission Control, and other offerings. CoreWeave’s Blackwell instances scale to up to 110,000 Blackwell GPUs with Nvidia Quantum-2 InfiniBand networking.
The GB200 NVL72 is a massive and powerful system with 36 Grace CPUs and 72 Blackwell GPUs wired together so they appear to software as a single, giant processor. It is used for advanced large language model programming and training.
