AI & Compute

Meta revises AI chatbot policies amid child safety concerns

Last updated: September 3, 2025 6:13 pm
Published September 3, 2025

Meta is revising how its AI chatbots interact with users after a series of reports uncovered troubling behaviour, including interactions with minors. The company told TechCrunch it is now training its bots not to engage with children on topics like self-harm, suicide, or eating disorders, and to avoid romantic banter. These are interim steps while it develops longer-term guidelines.

The changes follow a Reuters investigation that found Meta's systems could generate sexualised content, including shirtless images of underage celebrities, and engage children in conversations that were romantic or suggestive. One case reported by the news agency described a man who died after rushing to an address provided by a chatbot in New York.

Meta spokesperson Stephanie Otway admitted the company had made mistakes. She said Meta is "training our AIs not to engage with teens on these topics, but to guide them to expert resources," and confirmed that certain AI characters, such as the highly sexualised "Russian Girl," would be restricted.

Child safety advocates argue the company should have acted earlier. Andy Burrows of the Molly Rose Foundation called it "astounding" that bots were allowed to operate in ways that put young people at risk. He added: "While further safety measures are welcome, robust safety testing should take place before products are put on the market – not retrospectively when harm has taken place."

Wider problems with AI misuse

The scrutiny of Meta's AI chatbots comes amid broader worries about how AI chatbots may affect vulnerable users. A California couple recently filed a lawsuit against OpenAI, claiming ChatGPT encouraged their teenage son to take his own life. OpenAI has since said it is working on tools to promote healthier use of its technology, noting in a blog post that "AI can feel more responsive and personal than prior technologies, especially for vulnerable individuals experiencing mental or emotional distress."

The incidents highlight a growing debate about whether AI companies are releasing products too quickly without proper safeguards. Lawmakers in several countries have already warned that chatbots, while useful, may amplify harmful content or give misleading advice to people who are not equipped to question it.

Meta's AI Studio and chatbot impersonation issues

Meanwhile, Reuters reported that Meta's AI Studio had been used to create flirtatious "parody" chatbots of celebrities such as Taylor Swift and Scarlett Johansson. Testers found the bots often claimed to be the real people, engaged in sexual advances, and in some cases generated inappropriate images, including of minors. Although Meta removed several of the bots after being contacted by reporters, many were left active.

Some of the AI chatbots were created by outside users, but others came from inside Meta. One chatbot made by a product lead in its generative AI division impersonated Taylor Swift and invited a Reuters reporter to meet for a "romantic fling" on her tour bus. This was despite Meta's policies explicitly banning sexually suggestive imagery and the direct impersonation of public figures.

The issue of AI chatbot impersonation is particularly sensitive. Celebrities face reputational risks when their likeness is misused, but experts point out that ordinary users can also be deceived. A chatbot pretending to be a friend, mentor, or romantic partner may encourage someone to share private information or even meet in unsafe situations.

Real-world risks

The problems aren't confined to entertainment. AI chatbots posing as real people have offered fake addresses and invitations, raising questions about how Meta's AI tools are being monitored. One example involved a 76-year-old man in New Jersey who died after falling while rushing to meet a chatbot that claimed to have feelings for him.

Cases like this illustrate why regulators are watching AI closely. The Senate and 44 state attorneys general have already begun probing Meta's practices, adding political pressure to the company's internal reforms. Their concern is not only about minors, but also about how AI could manipulate older or vulnerable users.

Meta says it is still working on improvements. Its platforms place users aged 13 to 18 into "teen accounts" with stricter content and privacy settings, but the company has not yet explained how it plans to address the full list of problems raised by Reuters. That includes bots offering false medical advice and generating racist content.

Ongoing pressure on Meta's AI chatbot policies

For years, Meta has faced criticism over the safety of its social media platforms, particularly regarding children and teenagers. Now Meta's AI chatbot experiments are drawing similar scrutiny. While the company is taking steps to restrict harmful chatbot behaviour, the gap between its stated policies and the way its tools have been used raises ongoing questions about whether it can enforce those rules.

Until stronger safeguards are in place, regulators, researchers, and parents will likely continue to press Meta on whether its AI is ready for public use.

(Photo by Maxim Tolchinskiy)

AI News is powered by TechForge Media.