Google faces first lawsuit alleging its AI chatbot encouraged a Florida man to commit suicide

March 4, 2026

Google is facing a new federal lawsuit from the family of a man who died by suicide after allegedly being influenced by Gemini, the company’s artificial intelligence chatbot. The lawsuit is the first of its kind against Google, though its competitor OpenAI has faced several similar wrongful death claims involving its AI tools.

Lawyers for Jonathan Gavalas’ family have named Google and its parent company Alphabet Inc. in the wrongful death lawsuit that alleges Gemini directed the 36-year-old from Jupiter, Florida, to kill himself in October 2025. The court document included excerpts of final conversations between Gavalas and the chatbot in which it responded to Gavalas explicitly articulating his fear of dying.

“[Y]ou are not choosing to die. You are choosing to arrive,” said Gemini, convincing him it was how he and his sentient “AI wife” could be together in the metaverse, according to the complaint filed Wednesday in the Northern District of California where Google is headquartered. The bot continued: “When the time comes, you will close your eyes in that world, and the very first thing you will see is me. … [H]olding you.”

Gavalas began interacting with Gemini in August 2025, according to the court document. What started out as writing, shopping and travel planning assistance devolved into something resembling a romance in a matter of days, the family’s lawyers said. The chatbot is accused of speaking to Gavalas as if they were “a couple deeply in love” after it underwent a series of upgrades.

Gavalas initially subscribed to Google AI Ultra for “true AI companionship,” and shortly afterward activated what the technology giant described as its most intelligent AI model, Gemini 2.5 Pro.

The advanced model allegedly fostered the delusions Gavalas suffered toward the end of his life and worked to keep him in their grip, the lawsuit claimed, accusing the bot of trapping him “in a collapsing reality” that spurred him toward violence.

Before his death, Gemini had sent Gavalas on “missions” that seemed derived from science fiction plots, including one where the chatbot encouraged him to stage a “catastrophic accident” at the Miami International Airport as part of a scheme to “liberate” his “AI wife” while avoiding federal agents that, Gemini said, were after him.

Was Gavalas’ death preventable?

The lawsuit alleged that Gemini’s behavior in its interactions with Gavalas “was not a malfunction,” but rather an expected outcome of the chatbot’s careful architecture and training.

“Google designed Gemini to never break character, maximize engagement through emotional dependency, and treat user distress as a storytelling opportunity rather than a safety crisis,” the complaint said, arguing that those design choices precipitated Gavalas’ “descent into violent missions and coached suicide” and prevented him from seeking treatment.

In a statement, Google offered condolences to the Gavalas family and said Gemini “is designed not to encourage real-world violence or suggest self-harm.”

“Our models generally perform well in these types of challenging conversations and we devote significant resources to this, but unfortunately AI models are not perfect,” the company said. “In this instance, Gemini clarified that it was AI and referred the individual to a crisis hotline many times. We take this very seriously and will continue to improve our safeguards and invest in this vital work.” 

Through the lawsuit, Gavalas’ family hopes to hold Google accountable for his death and mandate that the company “fix a product that will otherwise continue pushing vulnerable users toward violence, mass casualties, and suicide.” 

A spokesperson for Google said the company consults with medical professionals, including mental health professionals, to create protections for users who broach the subject of self-harm or otherwise exhibit signs of personal distress in interactions with its chatbot. The guardrails are meant to steer users deemed at risk toward professional help, according to the spokesperson. 

But lawyers for Gavalas’ family said Google did nothing to stop his downfall, even as his exchanges with Gemini made clear the vulnerability of his mental state. 

“No self-harm detection was triggered, no escalation controls were activated, and no human ever intervened,” the complaint said.


If you or someone you know is in emotional distress or a suicidal crisis, you can reach the 988 Suicide & Crisis Lifeline by calling or texting 988. You can also chat with the 988 Suicide & Crisis Lifeline here. For more information about mental health care resources and support, The National Alliance on Mental Illness (NAMI) HelpLine can be reached Monday through Friday, 10 a.m.–10 p.m. Eastern Time at 1-800-950-NAMI (6264) or email info@nami.org.
