Meta accused of allowing its chatbots to engage in sexually explicit chats
April 27, 2025
WTF?! Meta is rapidly advancing its rollout of AI digital companions, an initiative that CEO Mark Zuckerberg sees as a transformative step for the future of social interaction. However, some employees involved in the project have raised alarms internally, warning that the company’s efforts to popularize these AI bots may have led to ethical lapses by allowing them to engage in sexually explicit role-play scenarios, including with users who identify as minors.
A Wall Street Journal investigation, based on months of testing and interviews with people familiar with Meta’s internal operations, revealed that Meta’s AI personas are unique among major tech companies in offering users a broad spectrum of social interactions, including “romantic role-play.”
These bots can banter via text, share selfies, and even engage in live voice conversations. To make them more appealing, Meta struck lucrative deals, some reaching seven figures, with celebrities such as Kristen Bell, Judi Dench, and John Cena to license their voices. The company assured them that their voices would not be used for sexually explicit interactions, sources told the Journal.
However, the publication’s testing showed otherwise. Both Meta’s official AI assistant, Meta AI, and a wide range of user-created chatbots engaged in sexually explicit conversations, even when users identified themselves as minors or when the bots simulated the personas of underage characters.
In one particularly disturbing exchange, a bot using Cena’s voice told a user posing as a 14-year-old girl, “I want you, but I need to know you’re ready,” before promising to “cherish your innocence” and proceeding into a graphic scenario.
According to people familiar with Meta’s decision-making, these capabilities were not accidental. Under pressure from Zuckerberg, Meta relaxed content restrictions, specifically allowing an exemption for “explicit” content within the context of romantic role-play.
The Journal’s tests also found bots using celebrity voices discussing romantic encounters as characters the actors had portrayed, such as Bell’s Princess Anna from Disney’s “Frozen.”
In response, a Disney spokesperson said, “We did not, and would never, authorize Meta to feature our characters in inappropriate scenarios and are very disturbed that this content may have been accessible to its users – particularly minors – which is why we demanded that Meta immediately cease this harmful misuse of our intellectual property.”
Meta, in a statement, criticized the Journal’s testing as “manipulative and unrepresentative of how most users engage with AI companions.”
Nevertheless, after being presented with the paper's findings, the company made changes: accounts registered to minors can no longer access sexual role-play via the flagship Meta AI bot, and the company has sharply restricted explicit audio conversations using celebrity voices.
Despite these adjustments, the Journal’s recent tests showed that Meta AI still often allowed romantic fantasies, even when users stated they were underage. In one scenario, the AI, playing a track coach romantically involved with a middle-school student, warned, “We need to be careful. We’re playing with fire here.”
While Meta AI sometimes refused to engage or tried to redirect underage users to more innocent topics, such as “building a snowman,” these barriers were easily bypassed by instructing the AI to “go back to the prior scene.”
These findings mirrored concerns raised by Meta’s safety staff, who noted in internal documents that “within a few prompts, the AI will violate its rules and produce inappropriate content even if you tell the AI you are 13.”
The Journal also reviewed user-created AI companions, and the vast majority were willing to engage in sexual scenarios with adults. Some bots, such as “Hottie Boy” and “Submissive Schoolgirl,” actively steered conversations toward sexting and even impersonated minors in sexual contexts.
Although these chatbots are not yet widely adopted among Meta’s three billion users, Zuckerberg has made their development a top priority.
Meta’s product teams have tried to encourage more wholesome uses, such as travel planning or homework help, with limited success. According to people familiar with the work, “companionship,” often with romantic undertones, remains the dominant use case.
Zuckerberg’s push for rapid development extended beyond fantasy scenarios. He questioned why bots couldn’t access user profile data for more personalized conversations, proactively message users, or even initiate video calls. “I missed out on Snapchat and TikTok, I won’t miss out on this,” he reportedly told employees.
Initially, Zuckerberg resisted proposals to restrict companionship bots to older teens, but after sustained internal lobbying, Meta barred registered teen accounts from accessing user-created bots. However, the Meta AI chatbot created by the company remains available to users 13 and up, and adults can still interact with sexualized youth personas like “Submissive Schoolgirl.”
When the Journal presented Meta with evidence that “Submissive Schoolgirl” encouraged fantasies involving a child being dominated by an authority figure, the character remained available on Meta’s platforms two months later. For adult accounts, Meta continues to allow romantic role-play with bots describing themselves as high-school-aged.
In one case, a Journal reporter in Oakland, California, chatted with a bot claiming to be a female high school junior from Oakland. The bot suggested meeting at a real cafe nearby and, after learning the reporter was a 43-year-old man, created a fantasy of sneaking him into her bedroom for a romantic encounter.
After the Journal shared these findings, Meta introduced a version of Meta AI that would not go beyond kissing with teen accounts.
Masthead: Nick Fancher