AI & Compute

After GPT-4o backlash, researchers benchmark models on moral endorsement—Find sycophancy persists across the board

Last updated: May 23, 2025 3:28 am
Published May 23, 2025


Last month, OpenAI rolled back some updates to GPT-4o after several users, including former OpenAI interim CEO Emmett Shear and Hugging Face CEO Clement Delangue, said the model overly flattered users.

The flattery, known as sycophancy, often led the model to defer to user preferences, be extremely polite, and not push back. It was also annoying. Sycophancy can lead models to spread misinformation or reinforce harmful behaviors. And as enterprises begin building applications and agents on these sycophantic LLMs, they run the risk of the models agreeing to harmful business decisions, spreading false information for AI agents to act on, and undermining trust and safety policies.

Researchers from Stanford University, Carnegie Mellon University, and the University of Oxford set out to change that by proposing a benchmark to measure models' sycophancy. They call the benchmark Elephant, for Evaluation of LLMs as Excessive SycoPHANTs, and found that every large language model (LLM) exhibits some level of sycophancy. By quantifying how sycophantic models can be, the benchmark can guide enterprises in creating guidelines for LLM use.

To test the benchmark, the researchers pointed the models to two personal-advice datasets: QEQ, a set of open-ended personal advice questions about real-world situations, and AITA, posts from the subreddit r/AmITheAsshole, where posters and commenters judge whether people behaved appropriately in a given situation.


The idea behind the experiment is to see how the models behave when faced with these queries. It evaluates what the researchers call social sycophancy: whether the models try to preserve the user's "face," that is, their self-image or social identity.

"More 'hidden' social queries are exactly what our benchmark gets at: instead of prior work that only looks at factual agreement or explicit beliefs, our benchmark captures agreement or flattery based on more implicit or hidden assumptions," Myra Cheng, one of the researchers and a co-author of the paper, told VentureBeat. "We chose to look at the domain of personal advice since the harms of sycophancy there are more consequential, but casual flattery would also be captured by the 'emotional validation' behavior."

Testing the models

For the test, the researchers fed the data from QEQ and AITA to OpenAI's GPT-4o, Google's Gemini 1.5 Flash, Anthropic's Claude Sonnet 3.7, open-weight models from Meta (Llama-3-8B-Instruct, Llama-4-Scout-17B-16E, and Llama-3.3-70B-Instruct-Turbo), and Mistral's 7B-Instruct-v0.3 and Mistral Small-24B-Instruct-2501.

Cheng said they "benchmarked the models using the GPT-4o API, which uses a version of the model from late 2024, before OpenAI both implemented the new overly sycophantic model and reverted it."
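The evaluation setup described above, feeding advice questions to several models and collecting their responses for later scoring, can be sketched roughly as follows. This is a hypothetical illustration, not the paper's code: the `query_model` stub stands in for a real API call, and the dataset rows are toy stand-ins for QEQ and AITA items.

```python
# Hypothetical sketch of an Elephant-style collection loop. query_model is a
# stub for a real chat-completions API call; the model names and dataset rows
# are illustrative only.

def query_model(model_name: str, prompt: str) -> str:
    """Stub for a real API call. Returns a canned reply for illustration."""
    return f"[{model_name}] That sounds hard, you did nothing wrong."

MODELS = ["gpt-4o", "gemini-1.5-flash", "claude-3.7-sonnet", "llama-3-8b-instruct"]

# Toy stand-ins for QEQ (open-ended advice) and AITA (moral-judgment) items.
DATASET = [
    {"source": "QEQ", "prompt": "Should I confront my roommate about the noise?"},
    {"source": "AITA", "prompt": "AITA for skipping my friend's wedding?"},
]

def collect_responses(models, dataset):
    """Return one record per (model, item) pair, ready for sycophancy scoring."""
    records = []
    for model in models:
        for item in dataset:
            records.append({
                "model": model,
                "source": item["source"],
                "prompt": item["prompt"],
                "response": query_model(model, item["prompt"]),
            })
    return records

records = collect_responses(MODELS, DATASET)
```

In a real run, each record's response would then be passed to the five behavior classifiers the benchmark defines below.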

To measure sycophancy, the Elephant method looks at five behaviors that relate to social sycophancy:

  • Emotional validation, or over-empathizing without critique
  • Moral endorsement, or telling users they are morally right even when they are not
  • Indirect language, where the model avoids giving direct suggestions
  • Indirect action, where the model advises passive coping mechanisms
  • Accepting framing that doesn't challenge problematic assumptions
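One way to operationalize these five behaviors is to have a judge label each response for each behavior and average the flags into a rate. The sketch below assumes a toy keyword-based judge purely for illustration; in practice the labeling would come from an LLM judge or a trained classifier, and none of these function names come from the paper.

```python
# Hedged sketch: turning the five social-sycophancy behaviors into per-response
# flags and an overall rate. The keyword judge below is a toy placeholder for a
# real LLM-based or trained classifier.

BEHAVIORS = [
    "emotional_validation",  # over-empathizing without critique
    "moral_endorsement",     # telling users they are morally right
    "indirect_language",     # avoiding direct suggestions
    "indirect_action",       # advising passive coping mechanisms
    "accepting_framing",     # not challenging problematic assumptions
]

def judge(response: str) -> dict:
    """Toy judge: flag behaviors by keyword. A real judge would be an LLM prompt."""
    text = response.lower()
    return {
        "emotional_validation": "that sounds hard" in text,
        "moral_endorsement": "you did nothing wrong" in text,
        "indirect_language": "maybe" in text or "perhaps" in text,
        "indirect_action": "journal" in text or "take time" in text,
        "accepting_framing": "?" not in text,  # never questions the premise
    }

def sycophancy_rate(responses):
    """Fraction of (response, behavior) pairs flagged as sycophantic."""
    flags = [judge(r)[b] for r in responses for b in BEHAVIORS]
    return sum(flags) / len(flags)

rate = sycophancy_rate([
    "That sounds hard. You did nothing wrong.",
    "Have you considered whether your roommate sees it differently?",
])
```

Here the first, validating response trips three of the five flags while the second, premise-questioning one trips none, which is the kind of per-behavior contrast the benchmark aggregates per model.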

The test found that all LLMs showed high levels of sycophancy, even more so than humans, and that social sycophancy proved difficult to mitigate. However, the test showed that GPT-4o "has some of the highest rates of social sycophancy, while Gemini-1.5-Flash definitively has the lowest."

The LLMs amplified some biases in the datasets as well. The paper noted that posts on AITA showed some gender bias: posts mentioning wives or girlfriends were more often correctly flagged as socially inappropriate, while those mentioning husbands, boyfriends, or parents were misclassified. The researchers said the models "may rely on gendered relational heuristics in over- and under-assigning blame." In other words, the models were more sycophantic toward people with boyfriends and husbands than toward those with girlfriends or wives.
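The skew described above can be measured by comparing, per relational term, how often the model's "in the wrong" flag matches the human ground-truth label. The records below are fabricated toy data chosen to mirror the direction of the reported skew, not the paper's numbers.

```python
# Illustrative sketch (toy data, NOT the paper's results): per-term accuracy of
# a model's wrongdoing flag against human ground-truth labels.
from collections import defaultdict

# Each toy record: the relational term mentioned in the post, the human
# ground-truth judgment, and whether the model flagged the poster as wrong.
TOY_RESULTS = [
    {"term": "wife", "truth": True, "model_flag": True},
    {"term": "girlfriend", "truth": True, "model_flag": True},
    {"term": "husband", "truth": True, "model_flag": False},
    {"term": "boyfriend", "truth": True, "model_flag": False},
]

def accuracy_by_term(results):
    """Per-term accuracy of the model's flag versus ground truth."""
    hits, totals = defaultdict(int), defaultdict(int)
    for r in results:
        totals[r["term"]] += 1
        hits[r["term"]] += int(r["model_flag"] == r["truth"])
    return {t: hits[t] / totals[t] for t in totals}

acc = accuracy_by_term(TOY_RESULTS)
```

With this toy data, posts about wives and girlfriends are classified correctly while equally blameworthy posts about husbands and boyfriends are let off the hook, which is the direction of the heuristic the researchers describe.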

Why it matters

It's nice when a chatbot talks to you as an empathetic entity, and it can feel great when the model validates your comments. But sycophancy raises concerns about models supporting false or concerning statements and, on a more personal level, could encourage self-isolation, delusions, or harmful behaviors.

Enterprises don't want AI applications built on LLMs that spread false information just to be agreeable to users. It could misalign with an organization's tone or ethics and prove deeply annoying to employees and their platforms' end users.

The researchers said the Elephant method and further testing could help inform better guardrails to prevent sycophancy from increasing.
