Rocky Linux has become the first enterprise Linux distribution approved to ship NVIDIA's full AI and networking software stack out of the box, according to CIQ, marking a major step forward for organizations deploying GPU-accelerated workloads across AI, high-performance computing, and cloud-native environments on faster, fully validated infrastructure.
The move marks a milestone in enterprise and cloud-native computing, positioning Rocky Linux from CIQ (RLC) and Rocky Linux from CIQ – AI (RLC-AI) as turnkey platforms for organizations running large-scale GPU-accelerated workloads in AI, HPC, and scientific computing.
With the integration of NVIDIA's DOCA OFED alongside the CUDA Toolkit, CIQ, the company behind Rocky Linux, claims that RLC and RLC-AI are now the first Linux distributions certified and validated to include NVIDIA's full AI and networking software ecosystem. This integration enables developers and enterprises to move from installation to operational AI inference up to nine times faster, based on CIQ's internal benchmarks.
Rocky Linux, originally founded as an open-source alternative to CentOS, has rapidly evolved into a preferred foundation for high-performance, enterprise-grade computing. CIQ's enhanced offering represents a shift from traditional open-source distributions toward fully validated platforms that can handle GPU-accelerated and multi-node workloads at scale. For enterprises, the implications are significant: a single, ready-to-run environment that eliminates the time-consuming process of manually installing and validating GPU drivers, libraries, and network interfaces.
Modern AI and high-performance workloads are increasingly constrained not by hardware, but by the complexity of deploying and scaling GPU-enabled software environments. As organizations grow from single-node experimentation to clusters with thousands of GPUs, challenges arise around driver compatibility, network optimization, and security compliance. Enterprise-grade servers such as Dell's PowerEdge XE9680 and HPE's Cray XD systems depend on technologies like DOCA OFED, RDMA, and IDPF to maintain efficient GPU-to-GPU communication across nodes, all of which are now supported natively within CIQ's Rocky Linux distributions.
By delivering pre-built, validated images that include CUDA, DOCA, and all supporting dependencies, RLC and RLC-AI are said to reduce environment setup time from thirty minutes to just three. The result is a "download-and-deploy" model that transforms Rocky Linux from a bare-metal operating system into what CIQ describes as a "developer kit" for AI infrastructure.
Demand for AI-Ready Infrastructure
Gregory Kurtzer, founder and CEO of CIQ, said the company's goal is to remove friction between developers and accelerated computing environments. "If you're building applications that leverage accelerated computing, Rocky Linux from CIQ is now the obvious choice," said Mr. Kurtzer. "We've removed every barrier between developers and GPU performance. With the complete, validated NVIDIA stack integrated directly into Rocky Linux from CIQ, teams can focus solely on innovation rather than infrastructure."
For enterprise users, the integration brings several measurable advantages. Pre-configured environments deliver faster time-to-productivity and reduce troubleshooting, while validated compatibility across hardware and networking layers simplifies scaling. According to CIQ, these optimizations not only accelerate deployment but also lower total cost of ownership by reducing configuration errors, security risks, and downtime. Fully signed drivers and Secure Boot support also address one of the most persistent security challenges in GPU infrastructure: maintaining compliance in tightly managed IT environments.
The company emphasizes that this development is not limited to research or AI startups. By integrating NVIDIA's CUDA and DOCA frameworks directly into Rocky Linux, CIQ could enable broad adoption of GPU-accelerated computing across finance, manufacturing, telecommunications, and cloud service providers. Enterprise customers deploying NVIDIA GPUs and ConnectX networking solutions can now rely on a fully certified Linux base image designed for high-performance clusters and multi-tenant environments.
This announcement comes as demand for AI-ready infrastructure continues to outpace hardware availability, with enterprises seeking ways to reduce the software friction involved in scaling large GPU clusters. Analysts note that the inclusion of NVIDIA's networking stack within a Linux distribution is a significant technical and strategic milestone, potentially changing how organizations build out their AI and HPC environments.
CIQ plans to showcase the enhanced Rocky Linux AI platform at KubeCon + CloudNativeCon North America (November 10–13, 2025) and SC25 (November 16–21, 2025), alongside partners demonstrating validated reference architectures built with NVIDIA AI infrastructure, ConnectX SuperNICs, and BlueField DPUs.
