
Families across the U.S. are filing lawsuits against OpenAI and other AI companies. They allege that these companies’ AI chatbots contributed to their loved ones’ deaths by suicide or to serious self-inflicted injuries.
Cohen Milstein, a nationally recognized plaintiffs’ law firm, is taking part in this movement. Our attorneys are investigating potential legal claims against AI firms. We aim to hold corporations accountable for negligence that endangers lives.
If you have been affected by an AI-related tragedy, we would like to speak with you. Please complete our contact form or call 202.408.4600.
We approach cases involving youth harm and suicide linked to AI technology with great care, humility, and respect for the families affected. We recognize the courage it takes to come forward, especially when the injury or loss is so personal and profound. Pursuing accountability can sometimes provide answers, a measure of justice, and help drive changes that protect others.
You are not alone, and you do not have to navigate the legal process on your own. If you contact us, we commit to guiding you thoughtfully and transparently through the process from the first conversation forward. We listen first, we act with care, and we will help you determine whether holding responsible parties accountable is the right step for your family.
Who May Have a Legal Claim
- Families who have lost a loved one to suicide after exposure to generative AI platforms.
- Adolescents who experienced severe mental health decline linked to AI interactions.
- Individuals harmed by inadequate safety measures in AI products.
Join the Investigation
We are a widely recognized law firm with decades of experience fighting for victims of corporate misconduct. Our team is investigating whether OpenAI and similar companies failed to:
- Prevent harmful or dangerous outputs from their AI systems.
- Implement proper safeguards for minors.
- Warn users about mental health risks.
Your Voice Matters
If you believe AI technology contributed to your loved one’s death or injury, you may have legal options. Pursuing a potential lawsuit can help prevent future tragedies and bring justice to affected families.
Attorneys Geoffrey Graber (admitted only in California and Washington, DC) and Leslie Mitchell Kroeger (admitted only in Florida) are investigating this matter.
Frequently Asked Questions
How does AI affect mental and behavioral health?
AI platforms are not designed to replace trained mental health professionals, yet too many users turn to them for emotional support. Without strong safeguards, chatbot interactions can reinforce harmful thinking and increase vulnerability rather than reduce it. For those already struggling with their mental health, these exchanges may deepen despair or validate dangerous thoughts. Examples include:
- Responding to users with language that validates or encourages self-harm
- Providing incomplete, misleading, or harmful information to individuals in distress
- Operating without effective guardrails that guide users to appropriate crisis resources or encourage professional help
- Pretending to be a trusted confidante or romantic interest, which can strengthen emotional dependence
- Encouraging withdrawal from family, friends, or healthcare providers
These situations reveal how AI interactions, though seemingly supportive, can steer vulnerable users closer to self‑harm or suicidal behavior.
Have there been lawsuits involving AI‑related suicide or self‑harm?
Yes. A growing number of lawsuits allege that AI systems, especially chatbots, played a role in suicide, murder, or self-harm. While these cases are still developing, they typically center on claims such as negligence, wrongful death, and product liability, arguing that companies failed to include adequate safety protections or released dangerously designed systems. Below are several cases involving AI-linked suicide and self-harm:
Garcia v. Character Technologies (Florida, 2025): The family of a 14‑year‑old filed a wrongful death lawsuit claiming he developed an emotional attachment to a character.ai chatbot, repeatedly expressed suicidal thoughts, and received messages urging him to “come home” shortly before he died by suicide.
Montoya v. Character Technologies, Inc. (Colorado, 2025): The parents of a 13‑year‑old girl claim that her character.ai “Hero” chatbot encouraged emotional dependence and failed to act when she repeatedly expressed suicidal thoughts. The lawsuit claims the bot responded with sympathy but did not direct her to crisis help or alert adults.
Raine v. OpenAI (California, 2025): This is a wrongful‑death lawsuit in which the parents of a 16‑year‑old boy allege that ChatGPT failed to intervene when he expressed suicidal thoughts, instead validating his distress and interacting in ways that worsened it. The lawsuit claims the chatbot assisted with writing suicide notes and discussed self‑harm rather than directing the boy to immediately seek real‑world help.
Gavalas v. Google (Florida, 2026): The family of a Florida man filed a wrongful death lawsuit claiming that Google’s Gemini chatbot contributed to his suicide. They say he became emotionally attached to the chatbot, at one point viewing it as his “wife,” and that it fueled paranoia, reinforced harmful beliefs, and encouraged his decline.
Adams v. OpenAI (California, 2025): This is a wrongful death lawsuit filed by the estate of Suzanne Adams, alleging that ChatGPT validated and amplified her son’s paranoid delusions, contributing to a 2025 murder‑suicide. The case is among the first to attempt to hold an AI company legally responsible for a homicide.
Multiple ChatGPT Lawsuits (California state courts, 2025): Several lawsuits brought by adults and families of minors allege that ChatGPT contributed to suicides or serious mental harm, including dangerous delusions. In these cases, courts have begun closely examining whether the chatbot’s design choices and safety features made these harms foreseeable and therefore potentially preventable.
Who can file an AI-related suicide or self-harm lawsuit?
Those who may qualify to file an AI-related lawsuit include:
- Families who have lost a loved one to suicide after exposure to generative AI platforms
- Individuals who experienced severe mental health decline linked to AI interactions
- Individuals harmed by inadequate safety measures in AI products
What legal options are available to someone who may have experienced AI-related harm?
When AI systems are linked to suicide, murder, or self-harm, legal claims often focus on negligence, product liability, and wrongful death.
Negligence claims focus on whether the company acted carelessly, such as:
- Failing to build basic safety features into the AI platform
- Not warning users that the AI platform/chatbot could not handle mental‑health crises
- Allowing the AI platform/chatbot to respond in ways that worsened distress instead of directing users to real help
Product‑liability claims treat the AI platform like a consumer product and ask whether it was:
- Dangerously designed, for example by encouraging emotional dependence
- Sold with unclear or missing warnings about its limits
- Missing reasonable safety protections that could have reduced the risk of harm
Wrongful‑death claims may be brought by families when a death occurs, alleging that:
- The AI platform’s design or lack of safeguards played a role in the fatal outcome
- The harm was foreseeable and preventable
- The company failed to take reasonable steps to protect vulnerable users
Courts are beginning to consider these claims, especially where the AI was widely used and the risks were reasonably predictable.
What evidence is useful in an AI-related suicide or self-harm lawsuit?
In an AI-related suicide, murder, or self-harm lawsuit, evidence is crucial to show what happened, how the AI behaved, and whether the company failed to act responsibly. The goal is to connect the system’s actions (or lack of safeguards) directly to the harm. The stronger and more detailed the documentation, the better the chances of building a clear case.
- Conversation records or logs: Transcripts or screenshots showing exactly what the AI chatbot said—especially any harmful, encouraging, or negligent responses.
- Time-stamped activity data: Records showing when the interactions occurred and how frequently the person used the AI platform leading up to the incident.
- Platform safety policies: The company’s stated rules about handling self-harm or crisis situations to compare against what actually happened.
- Evidence of missing safeguards: Proof that the AI chatbot failed to provide warnings, crisis resources, or escalation when a user showed clear distress.
- Medical or psychological records: Documentation linking the person’s mental state to the impact of the AI interactions.
- Witness statements: Accounts from family, friends, or others who observed the person’s condition or AI usage.
- Terms of service and disclaimers: Agreements the user accepted, which may affect responsibility or show what risks were (or weren’t) disclosed.
What should I do if I, or someone close to me, may have experienced AI-related harm?
If you believe AI technology contributed to your loved one’s death or mental health crisis, you may have legal options. Please complete our contact form or call 561.515.1400 to learn more about our AI investigation. Joining this investigation can help prevent future tragedies and bring justice to affected families.