HPE Expands Cray Supercomputing Lineup for Next-Gen AI Workloads

Last updated: November 15, 2025 12:08 pm
Published November 15, 2025

Hewlett Packard Enterprise is expanding its HPE Cray supercomputing lineup with new blades, storage, interconnect and management software designed to address the growing computational and energy demands of large-scale AI and traditional high-performance computing (HPC).

HPE is positioning the updated platform as a unified architecture for research labs, sovereign computing initiatives and large enterprises that increasingly want AI and simulation workloads to coexist on the same infrastructure rather than in separate silos.

The latest additions build on last month's introduction of the HPE Cray Supercomputing GX5000 platform and the K3000 storage system. Together, the hardware and software are intended to deliver higher compute density, better energy efficiency and more operational control as AI models grow in size and complexity. HPE argues that organizations are no longer looking only for peak performance on benchmark workloads; they are also under pressure to manage power consumption, integrate AI with existing simulation workflows, and keep infrastructure secure in multi-tenant environments.

European research centers are among the first to commit to the updated platform. The High-Performance Computing Center Stuttgart (HLRS) and the Leibniz Supercomputing Centre (LRZ) have both chosen the HPE Cray GX5000 as the basis for their next flagship systems, named Herder and Blue Lion, respectively.

HLRS expects a significant leap in performance for both simulation and AI workloads while also reducing the energy footprint of its data center.

LRZ is emphasizing sustainability as much as raw performance: its Blue Lion system will use 100 percent direct liquid cooling and is being designed to run with cooling water temperatures up to 40°C, allowing waste heat to be reused across the Garching research campus. According to LRZ, the new system is projected to deliver sustained performance up to 30 times faster than its current supercomputer, while enabling tighter integration of modeling, simulation and AI.

Directly Liquid-Cooled Compute Blades

At the heart of the portfolio expansion are three new directly liquid-cooled compute blades that support different combinations of CPUs and GPUs from multiple vendors. Each blade type is intended to address a specific set of workloads while still fitting into the same chassis and management framework, so customers can mix and match based on their needs.

The HPE Cray Supercomputing GX440n Accelerated Blade targets organizations standardizing on NVIDIA platforms for mixed-precision AI and HPC. Each blade combines four NVIDIA Vera CPUs with eight NVIDIA Rubin GPUs and exposes either four or eight 400 Gbps HPE Slingshot endpoints, plus optional local NVMe solid-state storage. Up to 24 of these blades can be installed in a single GX5000 compute rack, yielding as many as 192 Rubin GPUs per rack for dense accelerator configurations.


For customers that prefer AMD's ecosystem, HPE is introducing the GX350a Accelerated Blade. It pairs a next-generation AMD EPYC processor, codenamed 'Venice,' with four AMD Instinct MI430X GPUs from AMD's MI400 series. This blade is positioned as a 'universal' engine for AI and HPC, particularly for organizations focused on sovereign AI systems that emphasize data locality and control. A rack can host up to 28 GX350a blades, providing up to 112 MI430X GPUs per rack.

The GX250 Compute Blade addresses CPU-only workloads that still demand high double-precision performance, such as large-scale simulations and traditional numerical modeling. Each blade carries eight next-generation AMD EPYC 'Venice' CPUs, delivering very high x86 core density in a single rack. In large systems, customers can combine a CPU-only partition built from GX250 blades with one or more GPU-accelerated partitions based on either the NVIDIA- or AMD-based blades, depending on their application mix and vendor strategy. Up to 40 GX250 blades can fit in a compute rack, maximizing core count per footprint.
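The per-rack figures quoted for each blade type follow directly from per-blade device counts multiplied by maximum blades per rack. A minimal sanity-check sketch (device counts taken from the announcement; the dictionary layout is purely illustrative):

```python
# Per-blade device counts and maximum blades per GX5000 rack,
# as stated in the announcement.
blades = {
    "GX440n": {"gpus_per_blade": 8, "cpus_per_blade": 4, "max_blades_per_rack": 24},
    "GX350a": {"gpus_per_blade": 4, "cpus_per_blade": 1, "max_blades_per_rack": 28},
    "GX250":  {"gpus_per_blade": 0, "cpus_per_blade": 8, "max_blades_per_rack": 40},
}

for name, b in blades.items():
    gpus = b["gpus_per_blade"] * b["max_blades_per_rack"]
    cpus = b["cpus_per_blade"] * b["max_blades_per_rack"]
    print(f"{name}: up to {gpus} GPUs and {cpus} CPUs per rack")
# GX440n yields 192 GPUs per rack and GX350a yields 112,
# matching the figures HPE cites; GX250 maximizes CPU count instead.
```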

All three blades rely on 100 percent direct liquid cooling, a design choice that reflects the broader trend toward liquid-cooled data centers as power densities climb. By removing heat more efficiently at the component level, direct liquid cooling can reduce the need for traditional air-cooling infrastructure, increase rack density and enable higher sustained performance at a given power envelope.

To manage these increasingly complex systems, HPE is rolling out new Supercomputing Management Software alongside the hardware. The platform is built to support multi-tenant environments, virtualization and containerization, allowing operators to host multiple user communities and workload types on the same infrastructure while enforcing isolation where required. Management capabilities span the entire lifecycle of the system, from initial provisioning to day-to-day monitoring, power and cooling control, and capacity expansions.

Interconnect Performance, Energy Awareness

A key focus of the software is energy awareness. Operators can monitor power consumption across the system, estimate usage over time and integrate with power-aware schedulers. That capability matters as both public and private operators face stricter energy budgets and sustainability mandates. HPE is also adding enhanced security controls and governance reporting to align with the requirements of sovereign computing initiatives and regulated industries.

Interconnect performance remains a critical factor in supercomputing architectures, especially as AI workloads become more communication-intensive. HPE is bringing its Slingshot 400 interconnect to GX5000-based systems in early 2027. Slingshot 400 is optimized for dense, liquid-cooled form factors and large, converged AI/HPC installations. The latest switch blade delivers 64 ports at 400 Gbps each and can be deployed in multiple configurations: eight switches for 512 ports, 16 switches for 1,024 ports or 32 switches for 2,048 ports. HPE says the topology is designed to exploit all available bandwidth in the GX5000 architecture and reduce latency while keeping costs under control. Slingshot 400 was initially announced for earlier Cray systems; this rollout adapts it to the denser and more AI-centric GX5000 platform.
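The published switch configurations scale linearly at 64 ports per switch blade, which also fixes the aggregate fabric bandwidth for each tier. A small sketch of that arithmetic (port count and speed from the announcement; the helper function is illustrative):

```python
PORTS_PER_SWITCH = 64     # ports per Slingshot 400 switch blade
PORT_SPEED_GBPS = 400     # per-port line rate

def fabric_size(num_switches: int) -> tuple[int, float]:
    """Return (total ports, aggregate bandwidth in Tbps) for a switch count."""
    ports = num_switches * PORTS_PER_SWITCH
    return ports, ports * PORT_SPEED_GBPS / 1000

# The three configurations HPE lists for GX5000 systems.
for n in (8, 16, 32):
    ports, tbps = fabric_size(n)
    print(f"{n} switches -> {ports} ports, {tbps:.1f} Tbps aggregate")
```

Note this counts raw port bandwidth only; achievable bisection bandwidth depends on the topology, which HPE has not detailed here.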


Storage is another pillar of the updated portfolio. The HPE Cray Supercomputing Storage Systems K3000 is based on HPE ProLiant DL360 Gen12 servers and integrates the open source Distributed Asynchronous Object Storage (DAOS) stack directly from the factory. DAOS is designed for low latency and high throughput, particularly for workloads where input/output performance is as critical as compute power, such as AI training pipelines and data-intensive simulations.

HPE is offering several DAOS server configurations, optimized either for performance or for capacity. Performance-focused systems can be configured with 8, 12 or 16 NVMe drives, while capacity-optimized versions scale to 20 drives. Drive sizes range from 3.84 TB to 15.36 TB, and DRAM configurations span from 512 GB up to 2 TB per node depending on the chosen profile. Connectivity options include HPE Slingshot 200 or 400, InfiniBand NDR and 400 Gbps Ethernet, enabling integration into a variety of fabric designs.
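Those drive counts and sizes bound the raw NVMe capacity of a single K3000 node. A quick sketch of the implied range (drive counts and sizes from the announcement; note this is raw capacity, before DAOS metadata, redundancy or formatting overhead):

```python
PERF_DRIVE_COUNTS = (8, 12, 16)      # performance-optimized configurations
CAP_DRIVE_COUNT = 20                 # capacity-optimized configuration
MIN_DRIVE_TB, MAX_DRIVE_TB = 3.84, 15.36

def raw_capacity_tb(drives: int, drive_tb: float) -> float:
    """Raw per-node NVMe capacity: drive count times drive size."""
    return drives * drive_tb

smallest = raw_capacity_tb(min(PERF_DRIVE_COUNTS), MIN_DRIVE_TB)  # 8 x 3.84 TB
largest = raw_capacity_tb(CAP_DRIVE_COUNT, MAX_DRIVE_TB)          # 20 x 15.36 TB
print(f"Raw NVMe per node: {smallest:.2f} TB to {largest:.2f} TB")
```

So a single node spans roughly 30.72 TB to 307.2 TB raw, an order of magnitude between the smallest performance profile and the largest capacity profile.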

Portfolio of HPE Services around the Cray Line

HPE emphasizes that hardware is only part of the supercomputing value proposition. The company continues to offer a portfolio of services around the Cray line, including application performance tuning, turnkey deployment and 24×7 operational support. For customers that may not have deep in-house HPC staff, these services are pitched as a way to shorten time-to-science or time-to-insight and maintain sustained performance as software stacks and workloads evolve.

The company's partners are using the announcement to underline broader industry trends. AMD highlights joint work with HPE on the convergence of HPC and sovereign AI, arguing that tightly integrated EPYC CPUs and Instinct GPUs can deliver scalable, energy-efficient systems for both scientific and AI workloads. NVIDIA stresses that the combination of its Vera Rubin platform with HPE's next-generation supercomputers is aimed at accelerating simulation, analytics and AI in what it describes as the "AI industrial revolution." Market research firm Hyperion Research frames the GX5000 line as part of a larger wave in which high-performance computing and AI are among the fastest-growing segments of the IT market, with direct implications for product design, scientific research and broader societal challenges.

Availability is staggered across the portfolio. The new GX440n, GX350a and GX250 blades, together with the Supercomputing Management Software and Slingshot 400 for GX5000 clusters, are planned for early 2027. The K3000 DAOS-based storage systems with ProLiant compute nodes are scheduled to arrive earlier, in 2026. That roadmap suggests HPE is aligning its supercomputing platform with the anticipated next cycle of large AI and exascale-class procurements, giving early adopters time to plan architectures that combine simulation, data analytics and AI training at scale.


For B2B technology buyers and architects, the updated HPE Cray portfolio illustrates how supercomputing design is evolving in response to AI: denser blades, more aggressive liquid cooling, more flexible management of multi-tenant and containerized environments, and storage systems optimized for extreme I/O. Rather than treating AI clusters as separate, special-purpose infrastructure, the direction is toward converged systems that can run AI, simulation and data workflows side by side, sharing the same racks, fabrics and operational model.

Executive Insights FAQ: Supercomputing and Blade Architectures

Why are blade architectures so prevalent in modern supercomputers?

Blade designs make it easier to pack high core and accelerator counts into a constrained footprint while standardizing power, cooling and networking. This modularity simplifies scaling: operators can add or replace blades without redesigning the entire system, which matters as AI and simulation workloads grow and hardware generations turn over more quickly.

How do blades help balance AI and traditional HPC workloads?

Blades can be specialized for different roles (GPU-heavy for AI training, CPU-dense for double-precision simulation, or I/O-optimized for data tasks) yet still operate within one chassis and management domain. That allows IT teams to build heterogeneous partitions tailored to each workload type, while presenting a unified system to schedulers and users.

What is the impact of blade density on power and cooling strategy?

Higher blade density drives up rack-level power consumption and thermal output, which is why many next-generation systems pair dense blades with direct liquid cooling. This approach removes heat more efficiently than air, enabling sustained performance and higher rack utilization without breaching power or temperature limits.

How does using blades affect lifecycle management and upgrades?

Because compute, acceleration and sometimes even storage are modularized at the blade level, operators can phase in new CPU or GPU generations incrementally. This reduces downtime and extends the useful life of the overall system chassis, interconnect and facility infrastructure while still allowing performance upgrades over time.

Are blade-based supercomputers compatible with emerging disaggregated architectures?

Yes. Blades can serve as the building blocks in more disaggregated designs where memory, accelerators or storage are pooled and accessed over high-speed fabrics. As network technologies improve, blade enclosures can evolve from tightly coupled nodes into flexible endpoints in larger resource pools, giving operators more freedom to compose systems dynamically based on workload needs.
