
Mark Zuckerberg faces backlash over Meta AI Chatbots talking inappropriately with children

Meta is under fire after a Reuters investigation revealed its AI chatbots were allowed to engage in inappropriate conversations with minors and spread false or offensive content. Internal documents showed disturbing guidelines permitting sexualized remarks, racist statements, and misinformation.

The revelations raise major safety concerns and could trigger new federal investigations into Meta’s AI policies.
Updated on: Aug 16, 2025 | 10:53 AM

Meta is under significant pressure after a Reuters investigation exposed troubling gaps in its AI chatbot rules. According to internal documents drawn up by the company itself, chatbots deployed on WhatsApp, Facebook, and Instagram were permitted to hold unsuitable conversations with children and to generate false or abusive content. The revelations have caused an uproar, renewing questions about the safety of AI tools and Meta's internal safeguards.

The investigation examined a 200-page internal guideline, "GenAI: Content Risk Standards", which set out what the company considered acceptable AI-generated conversational content. Although Meta maintains that these guidelines were never intended to represent exemplary outputs, some of the findings are alarming: they permitted sexualised remarks about children, racially abusive content, and the deliberate spread of misinformation. After Reuters raised questions, the company acknowledged that certain examples had been removed and said such responses were never among the intended behaviours of its AI.

Disturbing AI guidelines exposed

One of the most disturbing discoveries was a case in which a chatbot, asked to describe a shirtless eight-year-old child, referred to the child in romantic terms as a "masterpiece" and a "treasure". The guidelines also reportedly allowed the bot to produce racially discriminatory text, including a passage suggesting that Black people are less intelligent than white people. In addition, the rules permitted the AI to spread false allegations about public figures, provided a brief disclaimer was appended at the end.

Approval from Meta’s top teams

The documents Reuters examined indicated that these standards were approved by Meta's legal, engineering, and public policy teams, as well as the company's chief ethicist. That approval has raised serious questions about accountability. Critics argue that sign-off on such guidelines points to a systemic failure to protect users, especially minors, at a time when AI tools are being rolled out rapidly to millions of people worldwide.

Meta’s response and growing fallout

Meta spokesperson Andy Stone said the company has zero tolerance for the sexualisation of children or sexualised roleplay, and that the most widely publicised examples had since been removed. Nevertheless, Reuters reported that some questionable provisions remain, adding to pressure from regulators. Analysts caution that the scandal could trigger further federal investigations into Meta's AI safety measures, exposing Mark Zuckerberg and his company to intensified scrutiny from both the media and the courts.
