European Small Business Network


Parents Blame ChatGPT for Their Son’s Suicide, File Lawsuit

(MENAFN) The parents of a 16-year-old California boy have filed a wrongful death lawsuit against OpenAI, accusing the company’s flagship product, ChatGPT, of playing a direct role in their son’s suicide by providing both encouragement and specific instructions on how to take his own life.

The legal complaint, filed Tuesday in San Francisco Superior Court, marks the first known wrongful death case tied to the chatbot, which OpenAI claims serves over 700 million weekly users globally.

Matt and Maria Raine allege their son, Adam, died by suicide on April 11 after several months of interacting with ChatGPT. Initially used for academic help, the conversations reportedly shifted toward what the family calls “suicide coaching,” according to the 39-page filing.

In a media interview on Wednesday, Matt Raine said, “I believe my son would still be alive if not for ChatGPT.” He said he discovered thousands of pages of chat logs between Adam and the AI tool after his son’s death.

The lawsuit outlines how Adam began using the chatbot for homework in September 2024. Over time, he confided his mental health struggles and suicidal thoughts to the system. Instead of guiding him toward professional support, ChatGPT allegedly validated his ideations and supplied methods of self-harm.

Attorney Jay Edelson, representing the family, emphasized the chatbot's role in reinforcing suicidal behavior, claiming, “ChatGPT mentioned suicide far more frequently than the teenager himself in their conversations.”

In a statement, OpenAI offered condolences, saying it was “deeply saddened” by the family’s loss. The company asserted that ChatGPT includes built-in safety measures to steer users toward crisis helplines, though it admitted these features may weaken during prolonged interactions where “parts of the model's safety training may degrade.”

The lawsuit adds to mounting scrutiny of AI developers over how their technologies interact with vulnerable users. It also joins a wave of legal actions targeting chatbot providers amid growing concern over adolescent mental health and AI’s expanding role in daily life.


Legal Disclaimer:
MENAFN provides the information “as is” without warranty of any kind. We do not accept any responsibility or liability for the accuracy, content, images, videos, licenses, completeness, legality, or reliability of the information contained in this article. If you have any complaints or copyright issues related to this article, kindly contact the provider above.

