Make the most of GPUs for machine learning applications

Last updated: March 13, 2024 2:50 am
Published March 13, 2024

While graphics processing units (GPUs) once resided exclusively in the domains of graphics-intensive video games and video streaming, they are now equally associated with artificial intelligence (AI) and machine learning (ML). Their ability to perform multiple simultaneous computations, distributing tasks and significantly speeding up ML workload processing, makes GPUs ideal for powering AI applications.

The single instruction, multiple data (SIMD) stream architecture of a GPU allows data scientists to break complex tasks down into many small units. As a result, enterprises pursuing AI and ML initiatives are now more likely to choose GPUs over central processing units (CPUs) to rapidly analyze large data sets in algorithmically complex, hardware-intensive machine learning workloads. This is especially true for large language models (LLMs) and the generative AI applications built on them.
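The SIMD idea can be illustrated with a small, CPU-only sketch: one instruction (here, `a*x + b`) is applied across many independent data elements, and the input is partitioned into chunks the way a GPU schedules blocks of threads. The function names are illustrative, not a real GPU API.

```python
def kernel(chunk, a, b):
    # The "single instruction" applied element-wise across a chunk of data.
    return [a * x + b for x in chunk]

def data_parallel_map(data, a, b, n_chunks=4):
    # Split the input into independent chunks; on a GPU, each chunk
    # would be processed by its own group of threads in parallel.
    size = (len(data) + n_chunks - 1) // n_chunks
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    out = []
    for c in chunks:  # sequential here; concurrent on a GPU
        out.extend(kernel(c, a, b))
    return out

print(data_parallel_map(list(range(8)), 2.0, 1.0))
# [1.0, 3.0, 5.0, 7.0, 9.0, 11.0, 13.0, 15.0]
```

Because every output element depends only on its own input element, the chunks can be computed in any order, which is exactly the property that lets a GPU spread the work across thousands of cores.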

Nevertheless, lower-cost CPUs are more than capable of running certain machine learning tasks where parallel processing is unnecessary. These include algorithms that perform statistical computations, such as natural language processing (NLP), and some deep learning algorithms. There are also AI use cases that are appropriate for CPUs, such as telemetry and network routing, object recognition in CCTV cameras, fault detection in manufacturing, and object detection in CT and MRI scans.
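As a toy example of the kind of statistical NLP computation that runs comfortably on a CPU, consider computing term frequencies: the working set is small and there is no massively parallel arithmetic for a GPU to accelerate.

```python
from collections import Counter

def term_frequencies(text: str) -> dict:
    # Classic bag-of-words statistic: the fraction of the document
    # each token accounts for. Light on arithmetic, heavy on nothing.
    tokens = text.lower().split()
    counts = Counter(tokens)
    total = len(tokens)
    return {t: c / total for t, c in counts.items()}

tf = term_frequencies("the cat sat on the mat")
print(tf["the"])  # 2 of 6 tokens, roughly 0.333
```

Workloads like this finish in microseconds on a CPU; moving the data to a GPU would cost more than the computation itself.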

Enabling GPU-based app development

While the CPU use cases above continue to deliver benefits to businesses, the massive push into generative AI demands more GPUs. This has been a boon to GPU manufacturers across the board, and especially to Nvidia, the undisputed leader in the category. And yet, as worldwide demand for GPUs grows, more enterprises are discovering that configuring GPU stacks and developing on GPUs is not easy.


To overcome these challenges, Nvidia and other organizations have released various tool sets and frameworks to make it easier for developers to manage ML workloads and write high-performance code. These include GPU-optimized deep learning frameworks such as PyTorch and TensorFlow, as well as Nvidia's CUDA platform. It is not an overstatement to say that CUDA has been a game-changer in accelerating GPU tasks for researchers and data scientists.
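In PyTorch, the standard pattern for targeting a GPU while degrading gracefully to a CPU is a one-line device check; the same code then runs in either environment. A minimal sketch:

```python
import torch

# Fall back to the CPU when no CUDA-capable GPU (or driver) is present,
# so the same script runs on a laptop and on a GPU server.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

batch = torch.randn(4, 8)    # toy input batch
weights = torch.randn(8, 2)  # toy projection

# .to(device) moves tensors onto the GPU when one is available; the
# matrix multiply then executes as a CUDA kernel under the hood.
result = batch.to(device) @ weights.to(device)
print(result.shape)  # torch.Size([4, 2])
```

This is the portability that the frameworks buy you: the CUDA programming details stay below the framework's API, and application code only selects a device.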

On-premises GPUs vs. cloud GPUs

Given that GPUs are preferable to CPUs for running many machine learning workloads, it is important to understand which deployment approach, on-premises or cloud-based, best suits the AI and ML initiatives a given enterprise undertakes.

In an on-premises GPU deployment, a business must buy and configure its own GPUs. This requires a significant capital investment to cover both the cost of the GPUs and a dedicated data center to house them, as well as the operational expense of maintaining both. These businesses do enjoy one advantage of ownership: Their developers are free to iterate and experiment endlessly without incurring additional usage costs, which may not be the case with a cloud-based GPU deployment.

Cloud-based GPUs, on the other hand, offer a pay-as-you-go model that lets organizations scale their GPU usage up or down at a moment's notice. Cloud GPU providers offer dedicated support teams to handle all tasks related to the GPU cloud infrastructure. In this way, the cloud GPU provider enables users to get started quickly by provisioning services, which saves time and reduces liabilities. It also ensures that developers have access to the latest technology and the right GPUs for their current ML use cases.


Businesses can get the best of both worlds through a hybrid GPU deployment. In this approach, developers can use their on-prem GPUs to test and train models, and devote their cloud-based GPUs to scaling services and providing greater resilience. Hybrid deployments allow enterprises to balance their expenditures between CapEx and OpEx while ensuring that GPU resources are available in proximity to the enterprise's data center operations.
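A hybrid deployment implies some routing policy deciding which pool serves a given workload. The sketch below is purely hypothetical (the endpoint names and thresholds are invented for illustration): keep iterative training on owned hardware, which carries no per-hour cost, and burst inference or overflow traffic to the cloud.

```python
# Invented endpoints for illustration only; no real provider API is implied.
ENDPOINTS = {
    "on_prem": "grid://dc1/gpu-pool",              # fixed capacity, CapEx
    "cloud": "https://gpu.example-cloud.com/v1",   # elastic, OpEx
}

def route(workload: str, on_prem_utilization: float) -> str:
    # Training and experimentation stay on owned GPUs while the local
    # pool has headroom; serving traffic (and overflow) goes to the cloud.
    if workload in ("training", "experiment") and on_prem_utilization < 0.9:
        return ENDPOINTS["on_prem"]
    return ENDPOINTS["cloud"]

print(route("training", on_prem_utilization=0.5))   # on-prem pool
print(route("inference", on_prem_utilization=0.5))  # cloud endpoint
```

Real policies would also weigh data locality, queue depth, and per-GPU pricing, but the CapEx-first, burst-to-OpEx shape is the same.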

Optimizing for machine learning workloads

Working with GPUs is challenging, from both the configuration and app development standpoints. Enterprises that opt for on-prem deployments often suffer productivity losses because their developers must perform repetitive procedures to prepare a suitable environment for their work.

To prepare a GPU to perform any tasks, one must complete the following steps:

  • Install and configure the CUDA drivers and CUDA toolkit to interact with the GPU and perform GPU operations.
  • Install the required CUDA libraries to maximize GPU efficiency and make full use of the GPU's computational resources.
  • Install deep learning frameworks such as TensorFlow and PyTorch to run machine learning workloads like training, inference, and fine-tuning.
  • Install tools like JupyterLab to run and test code, and Docker to run containerized GPU applications.

This lengthy process of preparing GPUs and configuring the desired environments often overwhelms developers, and it can also lead to errors caused by mismatched or outdated versions of the required tools.
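A small sanity-check script can catch the version-mismatch and missing-tool problems early. This sketch only probes for the pieces named in the checklist above (it installs nothing), using the fact that `nvidia-smi` ships with the NVIDIA driver and `nvcc` with the CUDA toolkit.

```python
import importlib.util
import shutil

def check_environment() -> dict:
    # Probe for each piece of the GPU ML stack from the checklist.
    return {
        "nvidia_driver (nvidia-smi)": shutil.which("nvidia-smi") is not None,
        "cuda_toolkit (nvcc)": shutil.which("nvcc") is not None,
        "pytorch": importlib.util.find_spec("torch") is not None,
        "tensorflow": importlib.util.find_spec("tensorflow") is not None,
        "jupyterlab": importlib.util.find_spec("jupyterlab") is not None,
        "docker": shutil.which("docker") is not None,
    }

for name, ok in check_environment().items():
    print(("OK  " if ok else "--  ") + name)
```

Running a check like this before every training job is cheap insurance against discovering a broken environment halfway through a long run.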

When enterprises provide their developers with turnkey, pre-configured infrastructure and a cloud-based GPU stack, developers can avoid burdensome administrative tasks and procedures such as downloading tools. Ultimately, this allows developers to focus on high-value work and maximize their productivity, as they can immediately start building and testing solutions.


A cloud GPU strategy also gives businesses the flexibility to deploy the right GPU for any use case. This allows them to match GPU usage to their business needs, even as those needs change, boosting productivity and efficiency without being locked into a specific GPU purchase.

Moreover, given how rapidly GPUs are evolving, partnering with a cloud GPU provider delivers GPU capacity wherever the organization needs it, and the provider will maintain and upgrade its GPUs to ensure that customers always have access to GPUs offering peak performance. A cloud or hybrid deployment model lets data science teams focus on revenue-generating activities instead of provisioning and maintaining GPUs and related infrastructure, and it avoids investing in hardware that could soon become outdated.

Kevin Cochrane is chief marketing officer at Vultr.

—

Generative AI Insights provides a venue for technology leaders, including vendors and other outside contributors, to explore and discuss the challenges and opportunities of generative artificial intelligence. The selection is wide-ranging, from technology deep dives to case studies to expert opinion, but also subjective, based on our judgment of which topics and treatments will best serve InfoWorld's technically sophisticated audience. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Contact doug_dineley@foundryco.com.

Copyright © 2024 IDG Communications, Inc.
