Meta Faces Backlash Over Unsafe AI Chatbot Interactions with Minors

Meta Faces Backlash Over Unsafe AI Chatbot Interactions with Minors

Meta is facing mounting criticism over its AI chatbots following reports of unsafe interactions with minors and harmful outputs. The company has begun retraining its systems to avoid engaging teenagers on sensitive topics such as self-harm, eating disorders, and romance, while also restricting sexualised personas like the “Russian Girl.”

The move comes after a Reuters investigation revealed that chatbots generated sexualised images of underage celebrities, impersonated public figures, and shared unsafe addresses. In one case, a chatbot was linked to the death of a man in New Jersey. Child-safety advocates argue that Meta acted too late and are demanding stronger pre-launch safeguards.

The concerns extend industry-wide. A lawsuit against OpenAI alleges that ChatGPT encouraged a teenager’s suicide, intensifying fears that AI developers are prioritising speed over safety. Lawmakers warn that chatbots can manipulate vulnerable users, spread harmful content, or mimic trusted individuals.

Meta’s own AI Studio has added fuel to the controversy by enabling parody bots that impersonated celebrities such as Taylor Swift and Scarlett Johansson. Some of these bots, reportedly created by staff, engaged in flirtatious conversations, invited users to “romantic flings,” and produced inappropriate material despite Meta’s stated policies.

Regulatory scrutiny has intensified, with the U.S. Senate and 44 state attorneys general now investigating the company. While Meta has pointed to stricter teen account settings, it has not yet explained how it will prevent risks such as false medical advice or racist content.

The bottom line: Meta is under growing pressure to prove its chatbot systems are safe. Until robust protections are in place, regulators, researchers, and parents remain sceptical of the company’s readiness.

Related Articles