ChatGPT creator OpenAI is changing how its artificial intelligence models respond to users experiencing emotional or mental health crises that leave them vulnerable to suicide.
The change was announced on the company's blog after the family of a 16-year-old boy in the United States, who died by suicide in April, claimed their son had interacted with ChatGPT for months, The Guardian reported Thursday.
The family's lawsuit, filed with the California State Superior Court for San Francisco, alleges that the teen discussed suicide methods with ChatGPT several times, including shortly before his death.
