Securing AI programs is a urgent concern for CIOs and CISOs resulting from AI and LLMs’ more and more important function in companies. Thus, they instinctively flip to Open Net Utility Safety Challenge (OWASP) for steering.
OWASP is thought for its Prime 10 checklist of internet utility safety flaws. During the last years, the group has expanded its focus and these days publish a bouquet of ‘Prime 10’ lists for numerous safety subjects, together with one for big language fashions (LLMs). However what does this checklist cowl? Is the menace steering complete?
Earlier than deep-diving into the OWASP LLM Prime 10 checklist, a change of perspective could be an eye-opener for safety professionals. Suppose you’re a cybercriminal: why would you assault an LLM?
The Attacker Mindset
Malicious hacking will not be an instructional endeavor. It’s a enterprise. Cybercriminals assault not what’s theoretically potential however which guarantees a fast monetary return. So, what’s the enterprise case for manipulating AI fashions and LLMs to unfold misinformation? Normally, different assaults are financially extra rewarding, equivalent to:
-
Cryptomining: Misusing the computing energy of compromised AI estates to mine cryptocurrencies – tremendous handy to money in.
-
Blackmail with delicate knowledge after stealing, e.g., affected person particulars, buyer data, or enterprise secrets and techniques, and demanding a ransom for not leaking it.
-
Distributed Denial-of-Service (DDoS) Assaults, i.e., bombarding business-critical programs with requests to deliver them down, typically for demanding ransom or throughout a political disinformation marketing campaign.
Extra superior assault types requiring extra effort, know-how, and assets are:
-
Credential Theft: Stealing credentials to maneuver by means of a company’s programs (lateral motion) to realize entry to extra worthwhile knowledge. When credentials relate to SaaS providers equivalent to ChatGPT, reselling them within the Darknet can also be an choice.
-
Triggering Financially Helpful Actions: Manipulating AI programs to carry out unauthorized actions like monetary transactions – clearly a fairly refined, high-effort assault.
OWASP LLM Prime 10: AI Safety Dangers
When trying on the OWASP LLM Top 10, 5 out of the ten dangers relate to manipulating or attacking the AI mannequin itself:
-
Immediate Injection (LLM01): Hackers manipulate AI programs by submitting requests, aka prompts, to the LLM in order that it behaves exterior its supposed use and generates dangerous or inappropriate outputs.
-
Coaching Knowledge Poisoning (LLM-03): Malicious actors corrupt coaching knowledge, decreasing the standard of AI fashions. The chance is related for publicly obtainable group coaching knowledge, not a lot for inner knowledge. The latter is just like pre-AI fraud or sabotage dangers for database
-
Mannequin Denial-of-Service (LLM04): Overload AI parts with requests to impression their stability and availability, affecting enterprise purposes that depend on them.
-
Delicate Data Disclosure (LLM-07): Exploiting LLMs to launch confidential knowledge resulting from unscrubbed enter knowledge ending in an LLM containing delicate data or lacking filtering of undesirable requests. LLMs miss stringent fine-granular entry management recognized from databases and file programs.
-
Mannequin Theft (LLM10): Hackers would possibly probe programs to know how they perform, which may result in mental property theft.
-
Overreliance on AI (LLM-09): Blind belief in AI outputs can result in improper choices, e.g., when LLMs “hallucinate” and fabricate data. It’s a pure enterprise danger, not associated to IT.
All these dangers listed within the LLM Prime 10 exist, although attackers would possibly wrestle to monetize profitable assaults in lots of eventualities. Organizations can mitigate such danger solely on a per-application or per-model stage, e.g., by pen-testing them periodically.
Architectural Layers and OWASP LLM Prime 10 Dangers
LLM Interplay Challenges
Enterprise advantages include a good integration of AI and LLMs into enterprise processes. The technical coupling of LLMs and different programs introduces technical safety dangers past the launched model-related points. These dangers depend for 4 of the LLM Prime 10:
-
Insecure output dealing with (LLM-02) warns towards feeding the output on to different programs with out cleansing the output towards, e.g., hidden assaults and malicious actions.
-
Extreme company (LLM-08) pertains to LLMs having extra entry rights than vital, e.g., to entry and ship emails, enabling profitable attackers to set off undesired actions in different programs (e.g., deletion of emails).
-
Permission points (LLM-06) relate to unclear authentication and authorization checks. The LLMs or their plugins would possibly make assumptions concerning customers and roles that aren’t assured by different parts.
-
Insecure plugin design (LLM-10) factors out the chance when APIs don’t depend on concrete, type-checked parameters however settle for free textual content, which could end in malicious habits when processing the request.
All these dangers relate to API hygiene and lacking security-by-design, which bigger organizations would possibly deal with with penetration testing and safety assurance measures.
Whereas exploitation requires excessive investments right now, this could change when LLM providers develop in the direction of ecosystems with widespread Third-party plugins.
Immediately, cybercriminals may see the prospect for mass assaults on vulnerabilities of widespread plugins or exploiting frequent misconfigurations. Skilled vulnerability administration would even be a should within the LLM context.
AI Tooling Dangers
Whereas the general public focuses on LLM assaults, the AI infrastructure for coaching and operating them would possibly current a extra important danger, even when corporations depend on SaaS or broadly used AI frameworks.
Points with two (open supply) AI frameworks, the ShadowRay vulnerability (CVE-2023-48022) and ‘Probllama’ (CVE-2024-37032), are latest examples.
Probllama impacts Ollama, a platform for deploying and operating LLMs, the place poor enter validation permits attackers to overwrite recordsdata, probably resulting in distant code execution.
hadowRay permits attackers to submit duties with out authentication – an open invitation for exploitation. Certainly, community zoning and firewalls assist, although (by some means horrifying) they don’t seem to be at all times in place. So, these two examples illustrate how rapidly AI tooling and framework vulnerabilities grow to be invites for cyber attackers.
Learn extra of the most recent knowledge heart safety information
Equally regarding is each tech firm CISO’s triumvirate of SaaS hell: Slack, Hugging Face, and GitHub (and their lookalikes). These instruments increase workforce collaboration and productiveness and assist handle code, coaching knowledge, and AI fashions.
Nevertheless, misconfigurations and operational errors can expose delicate knowledge or entry tokens on the net. As a result of their widespread use, these instruments are extra interesting targets for cybercriminals than particular person LLM assaults.
Nevertheless, there’s additionally excellent news: Organizations can mitigate many AI tooling-related dangers by standardizing and centralizing these providers to make sure correct safety hardening and fast responses when vulnerabilities emerge.
Generic IT Layer
It would shock many AI and safety professionals that commodity IT providers, like compute and storage, together with database-as-a-service, are sometimes extra easy to use than the AI.
Misconfigured object storage with coaching knowledge or as a part of RAG architectures permits attackers to steal knowledge for ransom. Entry to computing assets (or when stealing credentials for cloud estates) paves the best way for cybercriminals to spin up digital machines to mine cryptocurrency.
The OWASP LLM Prime 10 covers none of those dangers, although unsecured AI islands lacking up-to-date firewalls, zone separation, or enough entry management are simple prey for cybercriminals. Fortunately, CISOs perceive these dangers and usually have the required controls already in place to safe traditional utility workloads.
Outsourcing the toolchain and AI environments to SaaS suppliers doesn’t remove these threats 100% as a result of SaaS suppliers’ providers are additionally not at all times excellent.
Safety agency Wiz has proven that even well-known AI-as-a-Service choices equivalent to SAP AI Core, Hugging Face, or Replicate had severe (fixed-now) safety flaws, enabling malicious actors to bypass tenant restrictions and entry the assets of different clients.
The LLM Prime 10 solely vaguely addressed these dangers and subsumed them with many different subjects beneath “provider danger” (LLM-05).
To conclude, the OWASP LLM Prime 10 is ideal for elevating consciousness of AI-related safety subjects. Nevertheless, danger mitigation on the AI tooling and generic IT infrastructure layers is precedence one to stop attackers from effortlessly misusing assets for crypto mining or knowledge exfiltration for blackmailing.
Deep-diving into the small print of AI mannequin assaults makes absolute sense and is critical – in step two.