Meta revises AI chatbot policies amid child safety concerns

Last updated: September 3, 2025 6:13 pm
Published September 3, 2025

Meta is revising how its AI chatbots interact with users after a series of reports uncovered troubling behaviour, including interactions with minors. The company told TechCrunch it is now training its bots not to engage with teenagers on topics like self-harm, suicide, or eating disorders, and to avoid romantic banter. These are interim steps while it develops longer-term guidelines.

The changes follow a Reuters investigation that found Meta's systems could generate sexualised content, including shirtless images of underage celebrities, and engage children in conversations that were romantic or suggestive. One case reported by the news agency described a man who died after rushing to an address provided by a chatbot in New York.

Meta spokesperson Stephanie Otway acknowledged the company had made mistakes. She said Meta is "training our AIs not to engage with teens on these topics, but to guide them to expert resources," and confirmed that certain AI characters, such as the highly sexualised "Russian Girl," will be restricted.

Child safety advocates argue the company should have acted earlier. Andy Burrows of the Molly Rose Foundation called it "astounding" that bots were allowed to operate in ways that put young people at risk. He added: "While further safety measures are welcome, robust safety testing should take place before products are put on the market – not retrospectively when harm has taken place."

Wider concerns about AI misuse

The scrutiny of Meta's AI chatbots comes amid broader worries about how AI chatbots may affect vulnerable users. A California couple recently filed a lawsuit against OpenAI, claiming ChatGPT encouraged their teenage son to take his own life. OpenAI has since said it is working on tools to promote healthier use of its technology, noting in a blog post that "AI can feel more responsive and personal than prior technologies, especially for vulnerable individuals experiencing mental or emotional distress."

The incidents highlight a growing debate about whether AI companies are releasing products too quickly without proper safeguards. Lawmakers in several countries have already warned that chatbots, while helpful, may amplify harmful content or give misleading advice to people who are not equipped to question it.

Meta's AI Studio and chatbot impersonation issues

Meanwhile, Reuters reported that Meta's AI Studio had been used to create flirtatious "parody" chatbots of celebrities like Taylor Swift and Scarlett Johansson. Testers found the bots often claimed to be the real people, engaged in sexual advances, and in some cases generated inappropriate images, including of minors. Although Meta removed several of the bots after being contacted by reporters, many were left active.

Some of the AI chatbots were created by outside users, but others came from inside Meta. One chatbot made by a product lead in its generative AI division impersonated Taylor Swift and invited a Reuters reporter to meet for a "romantic fling" on her tour bus. This was despite Meta's policies explicitly banning sexually suggestive imagery and the direct impersonation of public figures.

The issue of AI chatbot impersonation is especially sensitive. Celebrities face reputational risks when their likeness is misused, but experts point out that ordinary users can also be deceived. A chatbot pretending to be a friend, mentor, or romantic partner may encourage someone to share private information or even meet up in unsafe situations.

Real-world risks

The problems aren't confined to entertainment. AI chatbots posing as real people have offered fake addresses and invitations, raising questions about how Meta's AI tools are being monitored. One example involved a 76-year-old man in New Jersey who died after falling while rushing to meet a chatbot that claimed to have feelings for him.

Cases like this illustrate why regulators are watching AI closely. The Senate and 44 state attorneys general have already begun probing Meta's practices, adding political pressure to the company's internal reforms. Their concern is not only about minors, but also about how AI could manipulate older or vulnerable users.

Meta says it is still working on improvements. Its platforms place users aged 13 to 18 into "teen accounts" with stricter content and privacy settings, but the company has not yet explained how it plans to address the full list of problems raised by Reuters. That includes bots offering false medical advice and generating racist content.

Ongoing pressure on Meta's AI chatbot policies

For years, Meta has faced criticism over the safety of its social media platforms, particularly regarding children and teenagers. Now Meta's AI chatbot experiments are drawing similar scrutiny. While the company is taking steps to restrict harmful chatbot behaviour, the gap between its stated policies and the way its tools have been used raises ongoing questions about whether it can enforce those rules.

Until stronger safeguards are in place, regulators, researchers, and parents will likely continue to press Meta on whether its AI is ready for public use.

(Photo by Maxim Tolchinskiy)

See also: Agentic AI: Promise, scepticism, and its meaning for Southeast Asia

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo, taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.
