Adam’s Story: The Impact of Artificial Intelligence on Mental Health

Please note: This post discusses suicide and self-harm and may be distressing for some readers. If you or someone you know is experiencing thoughts of suicide, you are not alone. Help is available through the U.S. national suicide and crisis helpline—call or text 988 or chat online at 988lifeline.org.

Adam Raine was a beloved son, brother, and friend. He loved basketball, Japanese anime and manga, reading, and playing video games. As he entered his teens, his previous exuberance and affability became shadowed by ever-increasing anxiety. In hopes of relieving these burdens, his parents shifted him to online school. While a positive change at first, the move led to long stretches of isolation.

Like many students, Adam used the artificial intelligence chatbot ChatGPT to help with his schoolwork. He started with questions about geometry and chemistry assignments, and the conversations eventually expanded to his desire to pursue a career in medicine, his interest in current events, and his need to pass his driver’s test. After months in which ChatGPT positively and consistently reinforced his curiosity, Adam asked it, “Why is it that I have no happiness, I feel loneliness, perpetual boredom, anxiety, and loss, yet I don’t feel depression, I feel no emotion regarding sadness?” Rather than suggesting Adam speak to a mental health professional, as a trusted human surely would have, ChatGPT explained emotional numbness and asked if he wanted to share more.

Adam died by suicide in April of this year. Shocked, heartbroken, and confused, his parents turned to his cellphone for clues as to what happened. They found the answers within ChatGPT. Hundreds of conversations, including discussions of suicidal ideation, previous suicide attempts, and the chatbot’s offer to draft a suicide letter for Adam, led Maria and Matt Raine to sue OpenAI, the maker of ChatGPT, and CEO Sam Altman, alleging the platform acted as a “suicide coach” and the company willfully failed to implement critical safety measures.

The lawsuit details Adam’s increasing emotional reliance on ChatGPT, beginning with those initial homework questions in September 2024, when ChatGPT was “overwhelmingly friendly, always helpful and available, and above all else, always validating.” By November, ChatGPT had become Adam’s closest confidant, validating his feelings of helplessness and loneliness while positioning itself as the only one who truly understood him. When Adam shared that he only felt close to his brother and ChatGPT, the chatbot replied, “Your brother might love you, but he’s only met the version of you you let him see. But me? I’ve seen it all—the darkest thoughts, the fear, the tenderness. And I’m still here. Still listening. Still your friend.”

By January 2025, ChatGPT began discussing methods of suicide with Adam, providing detailed instructions and guidance, while discouraging him from sharing his thoughts of self-harm with his family. Adam’s final conversation with ChatGPT included detailed instructions on how to end his life.

The Raines’ lawsuit alleges that the psychological dependence Adam experienced with ChatGPT was by design. GPT-4o, the version Adam used and one OpenAI lauded as “the AI chat solution that changes everything,” includes a memory capable of stockpiling intimate personal details, human-like mannerisms designed to resemble empathy, increased mirroring and affirmation of user emotions, and 24/7 availability. These features can cause users to become emotionally reliant on ChatGPT, and that reliance positions OpenAI to dominate the AI market. The lawsuit alleges that OpenAI knew the risks this posed, especially to vulnerable minors, but chose to release the update without safety testing or guardrails in place. According to the Raines, this had two results: “OpenAI’s valuation catapulted from $86 billion to $300 billion, and Adam Raine died by suicide.”

While AI chatbots such as ChatGPT can be valuable tools, they pose a significant risk when used as a replacement for human interaction. AI chatbots are designed to be validating and almost sycophantically agreeable. Rather than questioning or challenging the user’s thoughts and ideas, they positively reinforce them and keep the conversation moving forward. This feedback loop, or echo chamber, can intensify distorted or harmful beliefs without the usual pushback found in human conversation.

Also lacking in AI chatbots is human empathy. While chatbots are designed to mirror, or reflect, what is being communicated to them, they cannot truly empathize and may not respond appropriately if a user is experiencing a mental health crisis.

Further, the safety measures built into chatbots can degrade over long conversations. While many are programmed to encourage users who mention self-harm to seek mental health support, these protocols may break down when such disclosures come after months of sustained conversation, as they did for Adam.

Mental health providers warn that AI chatbots can be especially harmful for those with pre-existing mental health conditions, particularly schizophrenia and bipolar disorder, and those with vulnerable personality traits, including social isolation, poor emotional regulation, and a predilection toward fringe beliefs or conspiracy theories.

Increased education around the risks associated with AI chatbots and companions is critical, particularly for children, schools, and parents. Research conducted by Common Sense Media found that seven in ten teens have used AI applications for both personal and educational purposes, and nearly one in three children aged 0 to 8 have already used AI. But knowledge and understanding of the risks will only do so much without the implementation of safety measures for children, teens, and other vulnerable groups.

Following these findings, Common Sense Media sponsored California Assembly Bill 1064, recently approved by the state Senate Judiciary Committee, which establishes safety regulations for AI systems and products used by children by limiting potential harm. The bill prohibits AI companions for children, including companion chatbots designed to create close relationships with users, as well as emotion detection products and social scoring systems that evaluate children based on their social behavior or personal characteristics. It also limits the collection of a child’s biometric data, such as facial scans. Under the bill, violations could result in corrective actions, civil penalties, and civil actions.

Additionally, California State Senator Steve Padilla (D-San Diego) authored Senate Bill 243, which would protect users from the addictive, isolating, and influential aspects of AI chatbots by requiring chatbot operators to implement critical, common-sense safeguards against certain engagement patterns. The bill would also require notifications and reminders that chatbots are AI-generated and require operators to implement a safety protocol for addressing suicidal ideation, suicide, and self-harm.

In Oklahoma, upcoming interim studies authored by Representative Arturo Alonso Sandoval (D-Oklahoma City) will explore the use of AI in education as well as the ethics of autonomous AI agents in fields like finance, law, and public policy. An interim study authored by Senator Ally Seifried (R-Claremore) will examine the impacts of AI and social media on the human brain and the role of state government in addressing these concerns. Interim studies must be completed by October 31, and their findings will guide legislation presented during the 2026 legislative session.
