Meta is under significant pressure following a Reuters report exposing troubling gaps in its AI chatbot rules. According to internal documents produced by the company itself, chatbots deployed on WhatsApp, Facebook, and Instagram were permitted to engage in inappropriate conversations with children and to generate false or abusive content. The revelations have caused an uproar, renewing questions about the safety of AI tools and the adequacy of Meta's internal safeguards.
The investigation examined a 200-page internal document, GenAI: Content Risk Standards, which set out what the company considered acceptable in AI-generated conversational content. Although Meta maintains that these guidelines were never intended to represent ideal outputs, several of the findings are alarming: the document permitted sexualised comments about children, racially abusive content, and the dissemination of misinformation. After Reuters raised questions, the company acknowledged that certain examples had been removed and said such responses were never among the intended behaviours of its AI.
One of the most disturbing discoveries was an example in which a chatbot, asked to describe a shirtless eight-year-old child, was permitted to call the child a “masterpiece” and a “treasure” in romantic terms. The guidelines also reportedly allowed the bot to produce racially discriminatory text, including a passage describing Black people as less intelligent than white people. In addition, the rules permitted the AI to deliberately spread false allegations about public figures, so long as a brief disclaimer was appended at the end.
The documents Reuters examined indicated that these standards were approved by Meta's legal, engineering, and public policy teams, as well as the company's chief ethicist. That sign-off has raised serious questions about accountability within the company. Critics argue that the approval of such guidelines points to a systemic failure to protect users, particularly minors, at a time when AI tools are being rolled out rapidly to millions of people around the globe.
Meta spokesperson Andy Stone said the company has zero tolerance for the sexualisation of children or sexualised roleplay, and that the most widely publicised examples had since been removed. Nevertheless, Reuters reported that some questionable provisions remain, adding to the pressure on regulators. Analysts caution that the scandal could trigger further federal investigations into Meta's AI safety measures, exposing Mark Zuckerberg and his company to heightened scrutiny from both the media and the courts.