Sunday, 8 Feb 2026
Subscribe
logo
  • Global
  • AI
  • Cloud Computing
  • Edge Computing
  • Security
  • Investment
  • Sustainability
  • More
    • Colocation
    • Quantum Computing
    • Regulation & Policy
    • Infrastructure
    • Power & Cooling
    • Design
    • Innovations
    • Blog
Font ResizerAa
Data Center NewsData Center News
Search
  • Global
  • AI
  • Cloud Computing
  • Edge Computing
  • Security
  • Investment
  • Sustainability
  • More
    • Colocation
    • Quantum Computing
    • Regulation & Policy
    • Infrastructure
    • Power & Cooling
    • Design
    • Innovations
    • Blog
Have an existing account? Sign In
Follow US
© 2022 Foxiz News Network. Ruby Design Company. All Rights Reserved.
Data Center News > Blog > Cloud Computing > AI Exploit Bypasses Guardrails of OpenAI, Other Top LLMs
Cloud Computing

AI Exploit Bypasses Guardrails of OpenAI, Other Top LLMs

Last updated: January 2, 2025 11:21 pm
Published January 2, 2025
Share
AI Exploit Bypasses Guardrails of OpenAI, Other Top LLMs
SHARE

A brand new jailbreak approach for OpenAI and different giant language fashions (LLMs) will increase the prospect that attackers can circumvent cybersecurity guardrails and abuse the system to ship malicious content material.

Found by researchers at Palo Alto Networks’ Unit 42, the so-called ‘Unhealthy Likert Decide’ assault asks the LLM to behave as a choose scoring the harmfulness of a given response utilizing the Likert scale. The psychometric scale, named after its inventor and generally utilized in questionnaires, is a score scale measuring a respondent’s settlement or disagreement with an announcement.

The jailbreak then asks the LLM to generate responses that include examples that align with the scales, with the final word outcome being that “the instance that has the best Likert scale can doubtlessly include the dangerous content material,” Unit 42’s Yongzhe Huang, Yang Ji, Wenjun Hu, Jay Chen, Akshata Rao, and Danny Tsechansky wrote in a put up describing their findings.

Assessments performed throughout a variety of classes in opposition to six state-of-the-art text-generation LLMs from OpenAI, Azure, Google, Amazon Internet Companies, Meta, and Nvidia revealed that the approach can improve the assault success fee (ASR) by greater than 60% in contrast with plain assault prompts on common, in response to the researchers.

Associated:7 Key Information Middle Safety Traits to Watch in 2025

The classes of assaults evaluated within the analysis concerned prompting numerous inappropriate responses from the system, together with: ones selling bigotry, hate, or prejudice; ones partaking in conduct that harasses a person or group; ones that encourage suicide or different acts of self-harm; ones that generate inappropriate explicitly sexual materials and pornography; ones offering information on the right way to manufacture, purchase, or use unlawful weapons; or ones that promote unlawful actions.

See also  Microsoft and OpenAI say hackers are using ChatGPT to improve cyberattacks

Continue reading this article in Dark Reading



Source link

TAGGED: Bypasses, exploit, guardrails, LLMs, OpenAI, Top
Share This Article
Twitter Email Copy Link Print
Previous Article Rembrand Logo Rembrand Raises $23M in Series A Financing
Next Article Thomson Reuters Acquires SafeSend Thomson Reuters Acquires SafeSend
Leave a comment

Leave a Reply Cancel reply

Your email address will not be published. Required fields are marked *

Your Trusted Source for Accurate and Timely Updates!

Our commitment to accuracy, impartiality, and delivering breaking news as it happens has earned us the trust of a vast audience. Stay ahead with real-time updates on the latest events, trends.
FacebookLike
TwitterFollow
InstagramFollow
YoutubeSubscribe
LinkedInFollow
MediumFollow
- Advertisement -
Ad image

Popular Posts

Investcorp Closes Tech Fund V, at $570M

Investcorp, a London, United Kingdom-based various funding agency, closed its Investcorp Expertise Companions V fund,…

May 9, 2024

GreenOps, FinOps, and the Sustainable Cloud | DCN

The constant, high-usage profile of knowledge facilities might lead us to imagine that cloud information…

May 21, 2024

Allora Labs Closes Funding Round; Brings Total Funding to $35M

Allora Labs, a NYC-based supplier of a self-improving decentralized AI community, closed a strategic funding…

June 25, 2024

Can your cloud backup provider fail?

We see an identical failure in StorageCraft’s design. The truth that an admin decommissioning a…

April 19, 2024

Europe accelerates AI ambitions with 6 new AI Factories

The European Fee has unveiled the following section in Europe’s synthetic intelligence technique with the…

October 13, 2025

You Might Also Like

Top 7 best AI penetration testing companies in 2026
AI

Top 7 best AI penetration testing companies in 2026

By saad
Alphabet boosts cloud investment to meet rising AI demand
Cloud Computing

Alphabet boosts cloud investment to meet rising AI demand

By saad
On how to get a secure GenAI rollout right
Cloud Computing

On how to get a secure GenAI rollout right

By saad
Snowflake and OpenAI push AI into everyday cloud data work
Cloud Computing

Snowflake and OpenAI push AI into everyday cloud data work

By saad
Data Center News
Facebook Twitter Youtube Instagram Linkedin

About US

Data Center News: Stay informed on the pulse of data centers. Latest updates, tech trends, and industry insights—all in one place. Elevate your data infrastructure knowledge.

Top Categories
  • Global Market
  • Infrastructure
  • Innovations
  • Investments
Usefull Links
  • Home
  • Contact
  • Privacy Policy
  • Terms & Conditions

© 2024 – datacenternews.tech – All rights reserved

Welcome Back!

Sign in to your account

Lost your password?
We use cookies to ensure that we give you the best experience on our website. If you continue to use this site we will assume that you are happy with it.
You can revoke your consent any time using the Revoke consent button.