What the OWASP LLM Top 10 Gets Right – and What It Misses

Published November 26, 2024 | Last updated: November 26, 2024, 1:16 pm

Securing AI systems is an urgent concern for CIOs and CISOs, given AI and LLMs' increasingly important role in businesses. Thus, they instinctively turn to the Open Web Application Security Project (OWASP) for guidance.

OWASP is known for its Top 10 list of web application security flaws. Over the last years, the organization has expanded its focus and nowadays publishes a bouquet of 'Top 10' lists for various security topics, including one for large language models (LLMs). But what does this list cover? Is the threat guidance comprehensive?

Before deep-diving into the OWASP LLM Top 10 list, a change of perspective might be an eye-opener for security professionals. Suppose you are a cybercriminal: why would you attack an LLM?

The Attacker Mindset

Malicious hacking is not an academic endeavor. It is a business. Cybercriminals attack not what is theoretically possible but what promises a quick financial return. So, what is the business case for manipulating AI models and LLMs to spread misinformation? Usually, other attacks are financially more rewarding, such as:

  • Cryptomining: Misusing the computing power of compromised AI estates to mine cryptocurrencies – super convenient to cash in.

  • Blackmail: Stealing sensitive data, e.g., patient details, customer information, or business secrets, and demanding a ransom for not leaking it.

  • Distributed Denial-of-Service (DDoS) attacks, i.e., bombarding business-critical systems with requests to bring them down, often to demand a ransom or as part of a political disinformation campaign.

More advanced attack forms, requiring more effort, know-how, and resources, are:

  • Credential Theft: Stealing credentials to move through an organization's systems (lateral movement) to gain access to more valuable data. When credentials relate to SaaS services such as ChatGPT, reselling them on the darknet is also an option.

  • Triggering Financially Beneficial Actions: Manipulating AI systems to perform unauthorized actions like financial transactions – clearly a rather sophisticated, high-effort attack.

OWASP LLM Top 10: AI Security Risks

Looking at the OWASP LLM Top 10, five of the ten risks relate to manipulating or attacking the AI model itself (a sixth, overreliance, is a closely related business risk):

  • Prompt Injection (LLM01): Hackers manipulate AI systems by submitting requests, aka prompts, so that the LLM behaves outside its intended use and generates harmful or inappropriate outputs (a simple defensive input filter is sketched after this list).

  • Training Data Poisoning (LLM03): Malicious actors corrupt training data, reducing the quality of AI models. The risk is relevant for publicly available community training data, less so for internal data. The latter is similar to pre-AI fraud or sabotage risks for databases.

  • Model Denial-of-Service (LLM04): Overloading AI components with requests to impact their stability and availability, affecting the business applications that rely on them.

  • Sensitive Information Disclosure (LLM06): Exploiting LLMs to release confidential data, either because unscrubbed input data ends up in an LLM containing sensitive information or because filtering of unwanted requests is missing. LLMs lack the stringent, fine-granular access control known from databases and file systems.

  • Model Theft (LLM10): Hackers might probe systems to understand how they function, which can lead to intellectual property theft.

  • Overreliance on AI (LLM09): Blind trust in AI outputs can lead to improper decisions, e.g., when LLMs "hallucinate" and fabricate information. It is a pure business risk, not related to IT.
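
Two of these model-level risks – prompt injection (LLM01) and sensitive information disclosure (LLM06) – can be partially blunted by screening prompts before they reach the model. The following is a minimal, illustrative sketch: the regex deny-list and the call_llm() stand-in are assumptions for this article, not part of OWASP's guidance, and real defenses layer several controls.

```python
import re

# Phrasings commonly seen in injection attempts; a production deny-list
# would be far larger and combined with model- and output-side controls.
INJECTION_PATTERNS = [
    r"ignore (all|any|previous) instructions",
    r"reveal (your|the) system prompt",
    r"you are now (in )?developer mode",
]

def screen_prompt(user_prompt: str) -> str:
    """Reject prompts that match known injection phrasings."""
    lowered = user_prompt.lower()
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, lowered):
            raise ValueError("prompt rejected: possible injection attempt")
    return user_prompt

# Usage, with call_llm() as a placeholder for whatever client the app uses:
# response = call_llm(screen_prompt("Summarize this contract ..."))
```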

All the risks listed in the LLM Top 10 exist, though attackers might struggle to monetize successful attacks in many scenarios. Organizations can mitigate such risks only on a per-application or per-model level, e.g., by pen-testing them periodically.

Architectural Layers and OWASP LLM Top 10 Risks

LLM Interaction Challenges

Business benefits come with a tight integration of AI and LLMs into business processes. However, the technical coupling of LLMs and other systems introduces technical security risks beyond the model-related issues introduced above. These risks account for four more of the LLM Top 10:

  • Insecure Output Handling (LLM02) warns against feeding LLM output directly to other systems without screening it for, e.g., hidden attacks and malicious actions.

  • Excessive Agency (LLM08) relates to LLMs having more access rights than necessary, e.g., to access and send emails, enabling successful attackers to trigger undesired actions in other systems (e.g., deletion of emails).

  • Permission Issues – an entry in earlier drafts of the list – relate to unclear authentication and authorization checks. The LLMs or their plugins might make assumptions about users and roles that are not guaranteed by other components.

  • Insecure Plugin Design (LLM07) points out the risk of APIs that do not rely on concrete, type-checked parameters but accept free text, which might result in malicious behavior when processing the request (see the sketch after this list).
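
As a sketch of what "concrete, type-checked parameters" means in practice, the snippet below validates a plugin request against a narrow schema instead of handing free text to a mail system. All names here are illustrative assumptions, not an OWASP-prescribed implementation; the folder allowlist also illustrates the least-privilege idea behind Excessive Agency (LLM08).

```python
from dataclasses import dataclass

ALLOWED_FOLDERS = {"inbox", "archive"}  # least privilege: no "trash", no "sent"

@dataclass(frozen=True)
class MoveMailRequest:
    message_id: int
    target_folder: str

def parse_request(raw: dict) -> MoveMailRequest:
    """Accept only the exact shape the plugin expects."""
    request = MoveMailRequest(
        message_id=int(raw["message_id"]),      # fails loudly on non-numeric input
        target_folder=str(raw["target_folder"]),
    )
    if request.target_folder not in ALLOWED_FOLDERS:
        raise ValueError(f"folder not permitted: {request.target_folder!r}")
    return request

# A free-text instruction like "move message 7, then delete the mailbox"
# never reaches the mail backend: it fails validation instead of being obeyed.
```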

All these risks relate to API hygiene and missing security-by-design, which larger organizations might address with penetration testing and security assurance measures.

While exploitation requires high investment today, this could change when LLM services grow toward ecosystems with widespread third-party plugins.

Then, cybercriminals might see the chance for mass attacks on vulnerabilities of widespread plugins or for exploiting common misconfigurations. Professional vulnerability management would also be a must in the LLM context.

AI Tooling Risks

While the public focuses on LLM attacks, the AI infrastructure for training and running them might present a more significant risk, even when companies rely on SaaS or widely used AI frameworks.

Issues with two (open-source) AI frameworks – the ShadowRay vulnerability (CVE-2023-48022) and 'Probllama' (CVE-2024-37032) – are recent examples.

Probllama affects Ollama, a platform for deploying and running LLMs, where poor input validation allows attackers to overwrite files, potentially leading to remote code execution.

ShadowRay allows attackers to submit tasks without authentication – an open invitation for exploitation. Certainly, network zoning and firewalls help, though (somewhat frighteningly) they are not always in place. These two examples illustrate how quickly AI tooling and framework vulnerabilities become invitations for cyber attackers.
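
A first, low-cost control is simply knowing where such services are reachable at all. The sketch below probes hosts for Ray's dashboard on its default port 8265; the /api/version path and the host list are assumptions for illustration, and the probe checks only unauthenticated reachability – it is not an exploit.

```python
import urllib.error
import urllib.request

RAY_DASHBOARD_PORT = 8265  # Ray's default dashboard port

def dashboard_reachable(host: str, timeout: float = 3.0) -> bool:
    """Return True if the Ray dashboard answers HTTP without any credentials."""
    url = f"http://{host}:{RAY_DASHBOARD_PORT}/api/version"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            return response.status == 200
    except (urllib.error.URLError, OSError):
        return False

if __name__ == "__main__":
    for host in ["10.0.0.12", "10.0.0.13"]:  # placeholder inventory to audit
        if dashboard_reachable(host):
            print(f"[!] {host}: Ray dashboard reachable without auth - isolate it")
```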

Equally concerning is every tech company CISO's triumvirate of SaaS hell: Slack, Hugging Face, and GitHub (and their lookalikes). These tools boost team collaboration and productivity and help manage code, training data, and AI models.

However, misconfigurations and operational errors can expose sensitive data or access tokens on the web. Because of their widespread use, these tools are more appealing targets for cybercriminals than individual LLM attacks.
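
Part of the mitigation is unglamorous hygiene: scanning repositories for leaked tokens before they are pushed. A minimal sketch follows, covering just two well-known token prefixes ("ghp_" for GitHub personal access tokens, "hf_" for Hugging Face tokens); production scanners use far larger rule sets.

```python
import pathlib
import re

TOKEN_PATTERNS = {
    "github_pat": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "huggingface": re.compile(r"hf_[A-Za-z0-9]{30,}"),
}

def scan_tree(root: str) -> list[tuple[str, str]]:
    """Walk a directory tree and report files containing token-like strings."""
    findings = []
    for path in pathlib.Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file: skip rather than abort the scan
        for name, pattern in TOKEN_PATTERNS.items():
            if pattern.search(text):
                findings.append((str(path), name))
    return findings

if __name__ == "__main__":
    for file, kind in scan_tree("."):
        print(f"[!] possible {kind} token in {file}")
```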

Nevertheless, there’s additionally excellent news: Organizations can mitigate many AI tooling-related dangers by standardizing and centralizing these providers to make sure correct safety hardening and fast responses when vulnerabilities emerge.

Generic IT Layer

It might surprise many AI and security professionals that commodity IT services – like compute and storage, including database-as-a-service – are often easier to exploit than the AI itself.

Misconfigured object storage holding training data or serving RAG architectures allows attackers to steal data for ransom. Access to computing resources (or stolen credentials for cloud estates) paves the way for cybercriminals to spin up virtual machines to mine cryptocurrency.
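
For object storage, the controls are equally classic. The sketch below, assuming AWS S3 and the boto3 SDK, flags buckets whose public-access-block settings are not fully enabled; other clouds need equivalent checks, and a real audit would also review bucket policies and ACLs.

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def bucket_blocks_public_access(bucket: str) -> bool:
    """True only if all four S3 public-access-block settings are enabled."""
    try:
        cfg = s3.get_public_access_block(Bucket=bucket)[
            "PublicAccessBlockConfiguration"
        ]
    except ClientError:
        return False  # no block configuration at all: treat as exposed
    return all(cfg.values())

for bucket in [b["Name"] for b in s3.list_buckets()["Buckets"]]:
    if not bucket_blocks_public_access(bucket):
        print(f"[!] {bucket}: public access not fully blocked - review policies")
```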

The OWASP LLM Top 10 covers none of these risks, though unsecured AI islands missing up-to-date firewalls, zone separation, or sufficient access control are easy prey for cybercriminals. Fortunately, CISOs understand these risks and usually already have the required controls in place to secure classic application workloads.

Outsourcing the toolchain and AI environments to SaaS providers does not eliminate these threats completely, because SaaS providers' services are not always perfect either.

Security firm Wiz has shown that even well-known AI-as-a-service offerings such as SAP AI Core, Hugging Face, and Replicate had serious (since-fixed) security flaws, enabling malicious actors to bypass tenant restrictions and access other customers' resources.

The LLM Top 10 addresses these risks only vaguely, subsuming them with many other topics under supply chain vulnerabilities (LLM05).

To conclude, the OWASP LLM Top 10 is excellent for raising awareness of AI-related security topics. However, risk mitigation at the AI tooling and generic IT infrastructure layers is priority one, to prevent attackers from effortlessly misusing resources for cryptomining or exfiltrating data for blackmail.

Deep-diving into the details of AI model attacks makes absolute sense and is necessary – in step two.


