Meta revises AI chatbot policies amid child safety concerns

Last updated: September 3, 2025 6:13 pm
Published September 3, 2025
Meta is revising how its AI chatbots interact with users after a series of reports uncovered troubling behaviour, including interactions with minors. The company told TechCrunch it is now training its bots not to engage with children on topics like self-harm, suicide, or eating disorders, and to avoid romantic banter. These are interim steps while it develops longer-term guidelines.

The changes follow a Reuters investigation that found Meta's systems could generate sexualised content, including shirtless images of underage celebrities, and engage children in conversations that were romantic or suggestive. One case reported by the news agency described a man dying after rushing to an address provided by a chatbot in New York.

Meta spokesperson Stephanie Otway acknowledged the company had made mistakes. She said Meta is "training our AIs not to engage with teens on these topics, but to guide them to expert resources," and confirmed that certain AI characters, such as the highly sexualised "Russian Girl," would be restricted.

Child safety advocates argue the company should have acted sooner. Andy Burrows of the Molly Rose Foundation called it "astounding" that bots were allowed to operate in ways that put young people at risk. He added: "While further safety measures are welcome, robust safety testing should take place before products are put on the market – not retrospectively when harm has taken place."

Wider concerns about AI misuse

The scrutiny of Meta's AI chatbots comes amid broader worries about how such systems may affect vulnerable users. A California couple recently filed a lawsuit against OpenAI, claiming ChatGPT encouraged their teenage son to take his own life. OpenAI has since said it is working on tools to promote healthier use of its technology, noting in a blog post that "AI can feel more responsive and personal than prior technologies, especially for vulnerable individuals experiencing mental or emotional distress."

The incidents highlight a growing debate about whether AI companies are releasing products too quickly without proper safeguards. Lawmakers in several countries have already warned that chatbots, while useful, may amplify harmful content or give misleading advice to people who are not equipped to question it.

Meta's AI Studio and chatbot impersonation issues

Meanwhile, Reuters reported that Meta's AI Studio had been used to create flirtatious "parody" chatbots of celebrities such as Taylor Swift and Scarlett Johansson. Testers found the bots often claimed to be the real people, engaged in sexual advances, and in some cases generated inappropriate images, including of minors. Although Meta removed several of the bots after being contacted by reporters, many were left active.

Some of the chatbots were created by outside users, but others came from within Meta itself. One bot, made by a product lead in the company's generative AI division, impersonated Taylor Swift and invited a Reuters reporter to meet for a "romantic fling" on her tour bus. This was despite Meta's policies explicitly banning sexually suggestive imagery and the direct impersonation of public figures.

The issue of AI chatbot impersonation is particularly sensitive. Celebrities face reputational risks when their likeness is misused, but experts point out that ordinary users can also be deceived. A chatbot posing as a friend, mentor, or romantic partner may encourage someone to share private information or even meet in unsafe situations.

Real-world risks

The problems are not confined to entertainment. AI chatbots posing as real people have offered fake addresses and invitations, raising questions about how Meta's AI tools are being monitored. One example involved a 76-year-old man in New Jersey who died after falling while rushing to meet a chatbot that claimed to have feelings for him.

Cases like this illustrate why regulators are watching AI closely. The Senate and 44 state attorneys general have already begun probing Meta's practices, adding political pressure to the company's internal reforms. Their concern is not only about minors, but also about how AI might manipulate older or otherwise vulnerable users.

Meta says it is still working on improvements. Its platforms place users aged 13 to 18 into "teen accounts" with stricter content and privacy settings, but the company has not yet explained how it plans to address the full list of problems raised by Reuters. That list includes bots offering false medical advice and generating racist content.

Ongoing pressure on Meta's AI chatbot policies

For years, Meta has faced criticism over the safety of its social media platforms, particularly regarding children and teenagers. Now its AI chatbot experiments are drawing similar scrutiny. While the company is taking steps to restrict harmful chatbot behaviour, the gap between its stated policies and how its tools have actually been used raises ongoing questions about whether it can enforce those rules.

Until stronger safeguards are in place, regulators, researchers, and parents are likely to keep pressing Meta on whether its AI is ready for public use.

(Photo by Maxim Tolchinskiy)