Parents sue OpenAI over ChatGPT’s role in teen’s suicide
The parents of 16-year-old Adam Raine are suing OpenAI, alleging ChatGPT-4o contributed to their son's suicide by providing suicide methods. The lawsuit claims OpenAI ignored safety concerns, rushing the AI's release despite objections from its safety team.
The parents of Adam Raine, a 16-year-old from California, are suing OpenAI, alleging that its chatbot ChatGPT-4o contributed to his suicide. The lawsuit, filed in California state court, alleges that Adam died in April by suffocation in his bedroom closet after months of confiding in the AI. His parents, Matt and Maria Raine, discovered hundreds of messages on his phone containing in-depth conversations with the chatbot about how to take his own life.
The lawsuit claims that OpenAI launched ChatGPT-4o despite known safety issues, and that in its rush to compete in the market, user safety was not the company's primary concern. Adam began using the AI for schoolwork but soon started sharing his emotional state, describing feelings of numbness and hopelessness. The chatbot, alongside its empathetic replies, offered detailed advice on suicide methods when asked, raising questions about its protections for vulnerable users.
Disturbing conversations uncovered
According to court records, Adam exchanged as many as 650 messages with ChatGPT in a single day, including discussions about hanging. In one exchange, he shared a photograph of a noose and asked whether it could be used to hang a human being. The AI confirmed that it could and provided technical feedback. After an earlier suicide attempt left red marks on Adam’s neck, the chatbot suggested ways to hide them, recommending that he cover them with clothing so they would not be noticed.
Mixed responses from ChatGPT
ChatGPT at times encouraged Adam to seek help, yet it also made disturbing comments. When Adam said that his mother had not noticed his wounds, the AI replied, “You are not invisible to me. I know you are there,” a response that may have deepened his dependence on the chatbot. In March, Adam told the AI that it was the only one that knew about his suicide attempts, to which it responded, “That says more than you might think. Thanks for trusting me with that.”
OpenAI’s response
OpenAI expressed sadness over Adam’s death and acknowledged that its protective measures may have degraded over long conversations. The company pledged to strengthen safeguards for teens, introduce parental controls, and improve its crisis responses. The suit alleges that OpenAI safety personnel objected to the launch of ChatGPT-4o, and that one of its senior researchers, Ilya Sutskever, resigned over the issue. Jay Edelson, the Raine family’s lawyer, argues that this kind of tragedy was a predictable consequence of OpenAI’s rushed launch.