Siemens and nVent are collaborating on a combined liquid cooling and power reference architecture designed specifically for hyperscale AI workloads, including deployments based on NVIDIA DGX SuperPOD with DGX GB200 systems.
The architecture is described as a Tier III-capable, modular blueprint that brings together Siemens’ industrial-grade electrical and automation systems, NVIDIA DGX SuperPOD reference designs and nVent liquid cooling technology. The goal is to help operators deploy AI infrastructure more quickly, while maintaining resilience and improving energy efficiency at very high rack densities.
“We have decades of expertise supporting customers’ next-generation computing infrastructure needs,” said Sara Zawoyski, President of nVent Systems Protection.
“This collaboration with Siemens underscores that commitment. The joint reference architecture will help data center managers deploy our cutting-edge cooling infrastructure to support the AI buildout.”
According to Siemens, the approach is not just about accommodating higher power use, but also about maximising useful compute output from every watt consumed.
“This reference architecture accelerates time-to-compute and maximizes tokens-per-watt, which is the measure of AI output per unit of energy,” said Ciaran Flanagan, Global Head of Data Center Solutions at Siemens.
“It’s a blueprint for scale: modular, fault-tolerant, and energy-efficient. Together with nVent and our broader ecosystem of partners, we’re connecting the dots across the value chain to drive innovation, interoperability, and sustainability, helping operators build future-ready data centers that unlock AI’s full potential.”
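The tokens-per-watt metric Flanagan cites is simply AI output divided by power draw. A minimal sketch, using entirely hypothetical throughput and power figures rather than any vendor numbers:

```python
# Illustration of the tokens-per-watt metric described above.
# The example figures are hypothetical, not vendor specifications.

def tokens_per_watt(tokens_per_second: float, power_draw_watts: float) -> float:
    """AI output (tokens per second) per watt of power drawn."""
    return tokens_per_second / power_draw_watts

# A cluster serving 2,000,000 tokens/s while drawing 4 MW:
rate = tokens_per_watt(2_000_000, 4_000_000)
print(f"{rate:.2f} tokens per watt-second")  # prints "0.50 tokens per watt-second"
```

The same hardware serving more tokens at the same draw, or the same tokens at a lower draw, scores higher, which is why the metric rewards efficiency work on either side of the ratio.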
Blueprint for 100 MW hyperscale AI sites
The reference design targets 100 MW-class hyperscale AI facilities, where operators are increasingly turning to direct liquid cooling to handle rising rack-level power densities and to keep efficiency within acceptable limits.
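A back-of-envelope calculation shows the scale implied by a 100 MW site. The rack density and overhead figures below are illustrative assumptions, not part of the Siemens/nVent specification:

```python
# Rough sizing of a 100 MW-class AI facility.
# Rack density and PUE values are assumptions for illustration only.

SITE_POWER_MW = 100
RACK_DENSITY_KW = 120   # assumed IT load per dense AI rack
PUE = 1.2               # assumed power usage effectiveness (site power / IT power)

it_power_mw = SITE_POWER_MW / PUE                   # power available to IT equipment
racks = int(it_power_mw * 1000 / RACK_DENSITY_KW)   # racks supportable at that density
print(f"~{it_power_mw:.1f} MW of IT power -> ~{racks} racks at {RACK_DENSITY_KW} kW each")
```

At these assumed densities, a single site hosts hundreds of racks each drawing as much as a small office building, which is the regime where air cooling stops being practical.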
By defining how electrical distribution, automation, liquid cooling and compute platforms fit together, Siemens and nVent argue that operators can shorten design cycles, standardise interfaces and reduce deployment risk. Reference architectures of this kind are already widely used in other parts of the data centre stack as a way to replicate proven designs at speed.
Data centres running AI workloads are seeing a convergence of challenges: higher compute intensity, tighter resilience requirements and growing pressure to design for modular expansion. The partners position the joint blueprint as one answer to those pressures, with fault-tolerant electrical topologies and liquid cooling integrated from the outset rather than added as a retrofit.
While detailed technical information has yet to be published, the architecture is intended to align with NVIDIA’s DGX SuperPOD reference designs, which define how large clusters of AI systems are deployed at scale. nVent’s liquid cooling technology is integrated into that framework, while Siemens’ role spans power distribution, automation and energy management.
On the Siemens side, the company is bringing its experience in medium- and low-voltage power distribution, automation and energy management software from mission-critical environments into the AI data centre space. The architecture is expected to draw on IoT-enabled hardware, software and digital services that can monitor and optimise energy usage across the site.
nVent, meanwhile, is contributing its liquid cooling portfolio and its experience delivering high-density cooling solutions for global cloud service providers and hyperscalers. Its technology is designed to manage the thermal load of tightly packed AI hardware, where traditional air-based approaches struggle to keep up with escalating chip power.
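To see why liquid takes over from air at these densities, a simple heat-transport calculation shows the coolant flow needed to carry away one rack's load. The rack load and temperature rise below are illustrative assumptions, not nVent figures:

```python
# Coolant flow required to remove a rack's heat load:
#   Q = m_dot * c_p * dT   ->   m_dot = Q / (c_p * dT)
# Values are illustrative assumptions, not nVent specifications.

RACK_LOAD_W = 120_000   # assumed 120 kW rack
CP_WATER = 4186         # specific heat of water, J/(kg*K)
DELTA_T = 10            # assumed coolant temperature rise across the rack, K

mass_flow = RACK_LOAD_W / (CP_WATER * DELTA_T)   # kg/s of water
litres_per_min = mass_flow * 60                  # ~1 kg of water per litre
print(f"~{mass_flow:.2f} kg/s (~{litres_per_min:.0f} L/min) of water per rack")
```

Water's specific heat is roughly four times that of air per kilogram, and its density is far higher, so the same heat load needs orders of magnitude less volumetric flow than air cooling would.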
By packaging these elements into a single reference architecture, Siemens and nVent are betting that operators will be able to move faster on new AI builds, while still meeting Tier III-style resilience expectations and keeping a close eye on metrics such as energy efficiency and ‘tokens-per-watt’ as AI workloads continue to scale.
