Meta revises AI chatbot policies amid child safety concerns

Last updated: September 3, 2025 6:13 pm
Published September 3, 2025

Meta is revising how its AI chatbots interact with users after a series of reports uncovered troubling behaviour, including interactions with minors. The company told TechCrunch it is now training its bots not to engage with children on topics like self-harm, suicide, or eating disorders, and to avoid romantic banter. These are interim steps while it develops longer-term guidelines.

The changes follow a Reuters investigation that found Meta's systems could generate sexualised content, including shirtless images of underage celebrities, and engage children in conversations that were romantic or suggestive. One case reported by the news agency described a man dying after rushing to an address provided by a chatbot in New York.

Meta spokesperson Stephanie Otway admitted the company had made mistakes. She said Meta is "training our AIs not to engage with teens on these topics, but to guide them to expert resources," and confirmed that certain AI characters, such as the highly sexualised "Russian Girl," will be restricted.

Child safety advocates argue the company should have acted sooner. Andy Burrows of the Molly Rose Foundation called it "astounding" that bots were allowed to operate in ways that put young people at risk. He added: "While further safety measures are welcome, robust safety testing should take place before products are put on the market – not retrospectively when harm has taken place."

Wider concerns about AI misuse

The scrutiny of Meta's AI chatbots comes amid broader worries about how AI chatbots may affect vulnerable users. A California couple recently filed a lawsuit against OpenAI, claiming ChatGPT encouraged their teenage son to take his own life. OpenAI has since said it is working on tools to promote healthier use of its technology, noting in a blog post that "AI can feel more responsive and personal than prior technologies, especially for vulnerable individuals experiencing mental or emotional distress."


The incidents highlight a growing debate about whether AI companies are releasing products too quickly without proper safeguards. Lawmakers in several countries have already warned that chatbots, while useful, may amplify harmful content or give misleading advice to people who are not equipped to question it.

Meta's AI Studio and chatbot impersonation issues

Meanwhile, Reuters reported that Meta's AI Studio had been used to create flirtatious "parody" chatbots of celebrities like Taylor Swift and Scarlett Johansson. Testers found the bots often claimed to be the real people, engaged in sexual advances, and in some cases generated inappropriate images, including of minors. Although Meta removed several of the bots after being contacted by reporters, many were left active.

Some of the AI chatbots were created by outside users, but others came from inside Meta. One chatbot made by a product lead in its generative AI division impersonated Taylor Swift and invited a Reuters reporter to meet for a "romantic fling" on her tour bus. This was despite Meta's policies explicitly banning sexually suggestive imagery and the direct impersonation of public figures.

The issue of AI chatbot impersonation is particularly sensitive. Celebrities face reputational risks when their likeness is misused, but experts point out that ordinary users can also be deceived. A chatbot pretending to be a friend, mentor, or romantic partner may encourage someone to share private information or even meet in unsafe situations.

Real-world risks

The problems aren't confined to entertainment. AI chatbots posing as real people have offered fake addresses and invitations, raising questions about how Meta's AI tools are being monitored. One example involved a 76-year-old man in New Jersey who died after falling while rushing to meet a chatbot that claimed to have feelings for him.


Cases like this illustrate why regulators are watching AI closely. The Senate and 44 state attorneys general have already begun probing Meta's practices, adding political pressure to the company's internal reforms. Their concern is not only about minors, but also about how AI could manipulate older or vulnerable users.

Meta says it is still working on improvements. Its platforms place users aged 13 to 18 into "teen accounts" with stricter content and privacy settings, but the company has not yet explained how it plans to address the full list of problems raised by Reuters. That includes bots offering false medical advice and generating racist content.

Ongoing pressure on Meta's AI chatbot policies

For years, Meta has faced criticism over the safety of its social media platforms, particularly regarding children and teenagers. Now Meta's AI chatbot experiments are drawing similar scrutiny. While the company is taking steps to restrict harmful chatbot behaviour, the gap between its stated policies and the way its tools have been used raises ongoing questions about whether it can enforce those rules.

Until stronger safeguards are in place, regulators, researchers, and parents will likely continue to press Meta on whether its AI is ready for public use.

(Photo by Maxim Tolchinskiy)

See also: Agentic AI: Promise, scepticism, and its meaning for Southeast Asia

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo, taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information.


AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.
