Phononic, recognised for its solid-state cooling technology, is expanding its range of advanced cooling solutions for networking, GPUs, and AI data centres. The company, with deployments among hyperscalers, has launched a qualified and deployed 1.6T HVM solution, strengthening its presence in the evolving data centre thermal ecosystem.
The latest innovations include a GPU HBM cooling solution that seeks to improve energy efficiency, hardware lifespan, and computing performance while supporting a strong return on investment. As demand grows for higher bandwidth, faster performance, and greater energy efficiency, Phononic's solutions aim to address key challenges in high-performance computing environments.
Key features include:
- Next-Gen HBM4-Aligned GPU HBM Cooling Solutions: Designed for up to 75% higher heat dissipation, supporting sustained GPU performance and reducing thermal-induced throttling.
- High-Performance Pluggable Optics for 1.6T and Beyond: Expanded capability to handle roughly 50% greater heat loads without increasing power consumption.
- CPO-Ready Thermal Kit: Provides targeted cooling and packaging optimised for co-packaged optical engines in scalable deployments.
Phononic’s expansion into CPO cooling aims to address the thermal challenges posed by external laser sources and aligns with the industry shift toward co-packaged optics. This approach seeks to reduce signal-path distances and the associated energy consumption while supporting the scale and bandwidth density required for next-generation networks.
The Gen 2 GPU HBM Cooling Solution integrates Phononic’s Thermal Kit as a central subsystem. With up to 75% improved cooling, the system helps maintain GPU performance, reduces thermal throttling, and enhances cluster stability. Designed for seamless integration into existing data centre hardware, it allows operators to tune energy efficiency, computing output, and component longevity across the installation.
Phononic’s solutions aim to provide precise, node-level thermal management for GPU and AI workloads, supporting efficiency, performance, and infrastructure reliability in high-density environments.
