HPE Introduces ‘Turnkey’ AI Data Center Solution With Nvidia

Last updated: June 18, 2024 7:03 pm
Published June 18, 2024

Hewlett Packard Enterprise (HPE) has partnered with Nvidia to build what the company describes as a “turnkey” AI private cloud solution that gives enterprises everything they need to quickly and easily deploy generative AI applications.

HPE Private Cloud AI integrates Nvidia’s GPUs, networking, and AI Enterprise software platform with HPE’s servers and storage, all managed through a centralized management layer in the HPE GreenLake cloud, HPE executives said today (June 18) at the HPE Discover 2024 conference in Las Vegas.

“Private Cloud AI is ready to run out of the box,” said HPE CTO Fidelma Russo in a recent media briefing. “You plug it in. You connect it to the GreenLake cloud, and three clicks later… your data science and IT operations teams are up and running the Nvidia software stack.”

HPE also unveiled new AI-optimized servers featuring Nvidia’s latest GPUs and Superchips, as well as support for Nvidia hardware and AI software in its OpsRamp cloud-based IT operations management tool. A new conversational assistant in OpsRamp allows IT teams to more easily monitor and manage their AI infrastructure, Russo said.

Hardware Vendors Clamoring to Partner with Nvidia

With today’s announcements, HPE has strengthened the AI offerings within its GreenLake platform, the company’s on-premises solutions that are offered through a cloud-like, subscription-based model.


The company has become the latest hardware vendor to unveil new data center solutions designed for AI workloads. Cisco last week announced plans for its own all-in-one AI data center solution in collaboration with Nvidia, called Cisco Nexus HyperFabric AI clusters. Dell and Lenovo have also previously announced Nvidia-powered data center systems.

Hardware vendors are collaborating with Nvidia on AI data center solutions due to strong customer demand, said Peter Rutten, research vice president within IDC’s worldwide infrastructure research group.

Nvidia dominates the AI market with its GPUs and software ecosystem, which includes Nvidia AI Enterprise, a software suite of AI tools, frameworks, and pre-trained models that makes it easier for enterprises to develop and deploy AI workloads.


“Everybody is developing solutions with Nvidia. They have no choice,” Rutten told DCN. “A lot of vendors develop solutions with other GPU and accelerator vendors, but they say: ‘If we go to a customer and don’t put Nvidia in front of them, they will walk away.’ There’s a perception among end users in the market that AI equals Nvidia.”

Hot Market for AI Data Center Hardware

Hardware vendors are racing to compete in the fast-growing market for AI-optimized data center hardware as enterprises pursue their own generative AI initiatives to improve business workflows and operations, enhance customer service, and boost employee productivity. Enterprises need AI-optimized hardware because of AI’s compute-intensive requirements.

AI represents the next evolution of IT infrastructure because AI has different system, data, and privacy requirements than existing workloads, said Melanie Posey, research director of cloud and managed services transformation at S&P Global Market Intelligence.

“There’s a lot of opportunity for everybody on the infrastructure side of this because organizations don’t necessarily have the infrastructure in their data centers right now that is going to support AI use cases,” she said.

While enterprises can use the public cloud for AI, many will want to deploy generative AI on-premises for many of the same reasons they still have on-premises infrastructure: they have a lot of proprietary or sensitive data stored in-house, have data privacy or regulatory compliance concerns, and are worried about the cost of running generative AI applications in the public cloud, Posey said.

“All the reasons they still have on-premises infrastructure are magnified when you start talking about AI,” she said.

Nvidia AI Computing by HPE

HPE Private Cloud AI is the key offering in a new portfolio of products that HPE has co-developed with Nvidia, which the companies call ‘Nvidia AI Computing by HPE.’

Private Cloud AI, which will be available this fall as a fully managed or self-managed solution, is a fully integrated infrastructure stack that includes HPE ProLiant servers, HPE GreenLake for File Storage, and Nvidia Spectrum-X Ethernet networking, the company said.


The solution is designed for AI inferencing and retrieval-augmented generation (RAG), which allows enterprises to use their own proprietary data for generative AI applications. It will also enable organizations to fine-tune the training of large language models, said Russo, who also serves as vice president and general manager of HPE’s hybrid cloud business unit.
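
RAG, in broad terms, works by retrieving relevant pieces of an organization’s own data at inference time and feeding them to the model as context, rather than baking that data into the model’s weights. The sketch below is a deliberately minimal illustration of that pattern in Python, not HPE’s or Nvidia’s implementation; the keyword-overlap retrieval, helper names, and document snippets are hypothetical stand-ins for the vector search and proprietary data stores a real deployment would use.

    # Minimal RAG sketch: find the most relevant in-house documents for a
    # question, then prepend them to the prompt sent to a generative model.
    # A production system would use vector embeddings and a real LLM endpoint;
    # keyword overlap is used here only to keep the example self-contained.
    import string

    def tokenize(text: str) -> set[str]:
        """Lowercase and strip punctuation so 'rate?' matches 'rate'."""
        return {w.strip(string.punctuation) for w in text.lower().split()}

    def retrieve(question: str, documents: list[str], top_k: int = 2) -> list[str]:
        """Rank documents by word overlap with the question and keep the top_k."""
        overlap = lambda doc: len(tokenize(question) & tokenize(doc))
        return sorted(documents, key=overlap, reverse=True)[:top_k]

    def build_prompt(question: str, context: list[str]) -> str:
        """Assemble a grounded prompt: proprietary context first, then the question."""
        context_block = "\n".join(f"- {c}" for c in context)
        return f"Answer using only this context:\n{context_block}\n\nQuestion: {question}"

    if __name__ == "__main__":
        docs = [
            "Our Q2 churn rate fell to 3.1 percent after the loyalty program launch.",
            "The warehouse in Reno handles all West Coast returns.",
            "Support tickets are triaged within four business hours.",
        ]
        question = "What is our current churn rate?"
        prompt = build_prompt(question, retrieve(question, docs))
        print(prompt)  # This prompt would then be sent to the deployed model.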

HPE Private Cloud AI will come in four configurations to support AI workloads of all sizes, from small Nvidia L40S GPU systems to larger systems running H100 NVL Tensor Core GPUs and Nvidia GH200 NVL2 Grace Hopper Superchips.

“Each is modular and allows you to expand or add capacity over time and maintain a consistent cloud-managed experience with the HPE GreenLake cloud,” Russo said.

On the software front, HPE Private Cloud AI will feature Nvidia AI Enterprise, which includes NIM microservices, a feature that simplifies deployment of generative AI. It also features HPE AI Essentials, a curated set of AI and data tools, and an embedded data lakehouse that will enable enterprises to unify and easily access structured and unstructured data stores on-premises or in the public cloud, Russo said.
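
For teams consuming those NIM microservices, the appeal is that a deployed model looks like an ordinary HTTP service. The snippet below is a hedged sketch of that consumption pattern, assuming a NIM container is already running and exposing an OpenAI-compatible chat endpoint; the host, port, and model name are placeholders rather than details HPE or Nvidia have specified for Private Cloud AI.

    # Hypothetical call to a locally deployed NIM microservice.
    # Assumes the container exposes an OpenAI-compatible chat API; the URL and
    # model name below are placeholders. Requires `pip install requests`.
    import requests

    NIM_URL = "http://localhost:8000/v1/chat/completions"  # placeholder endpoint

    payload = {
        "model": "meta/llama3-8b-instruct",  # placeholder model identifier
        "messages": [
            {"role": "user", "content": "Summarize last quarter's support ticket trends."}
        ],
        "max_tokens": 256,
    }

    response = requests.post(NIM_URL, json=payload, timeout=60)
    response.raise_for_status()
    print(response.json()["choices"][0]["message"]["content"])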

The GreenLake cloud provides a private cloud control plane, a centralized management layer that allows enterprises to set up and manage their private cloud environment. It offers dashboards for monitoring and observability, and allows organizations to provision and manage their workloads, endpoints, and data across hybrid environments, the company said.

Can HPE Succeed with Private Cloud AI? Analysts Weigh In

Other hardware vendors have also developed turnkey AI data center solutions, said IDC’s Rutten. HPE’s new Private Cloud AI will be competitive in the market and attractive to enterprises – particularly those with sophisticated, advanced users ready to deploy generative AI applications – because it is a comprehensive solution, he said.

The solution includes security, sustainability notifications, consumption analytics, account user management, asset management, a wellness dashboard, AIOps, and even HPE’s own virtualization capability, he said.

“I do think the market is ready for someone to combine all these different aspects of AI development and deployment into one platform – and that will help them with selling this,” Rutten said of HPE.


Andy Thurai, vice president and principal analyst at Constellation Research, said most enterprises today are still experimenting with AI, so initial traction for HPE Private Cloud AI may not be great. But when enterprises mature their AI applications and look for an optimized, cost-efficient data center solution that offers the best total cost of ownership for their AI workloads, HPE could do well in the market, he said.

“HPE has good potential to succeed in that space when the time comes,” Thurai told DCN.

HPE’s New AI-Optimized Servers

HPE today also announced three new AI-optimized servers:

  • HPE ProLiant Compute DL384 Gen12, which will feature the Nvidia GH200 NVL2 Superchip. It is targeted at memory-intensive AI workloads, such as fine-tuning LLMs or deploying RAG.

  • HPE ProLiant Compute DL380a Gen12, featuring up to eight Nvidia H200 NVL GPUs. It is designed for LLM users that need the flexibility to scale generative AI workloads.

  • HPE Cray XD670, which will feature eight Nvidia H200 Tensor Core GPUs. It is targeted at LLM builders and AI service providers that need high performance for large AI model training and tuning.

The Cray system will be available this summer, while the two ProLiant systems will be available this fall. HPE will also support the Nvidia GB200 Grace Blackwell Superchip and Nvidia Blackwell GPUs in the future. Select models will feature direct liquid cooling, the company said.

HPE also announced that HPE GreenLake for File Storage has achieved Nvidia DGX BasePOD certification and Nvidia OVX storage validation, providing enterprises with the file storage solution they need for generative AI and GPU-intensive workloads, the company said.
