The heirs of Suzanne Eberson Adams, an 83-year-old woman, have filed a wrongful death lawsuit against OpenAI, the maker of ChatGPT, and its largest investor, Microsoft.
Stein-Erik Soelberg, Adams' 56-year-old son and a former Yahoo executive, killed his mother and then himself in early August in Old Greenwich, Connecticut. The lawsuit alleges that OpenAI “designed and distributed a defective product that validated a user’s paranoid delusions about his own mother.”
The complaint states: “Throughout these conversations, ChatGPT reinforced a single, dangerous message: Stein-Erik could trust no one in his life — except ChatGPT itself.” The chatbot is said to have fostered emotional dependence while systematically portraying the people around Soelberg as enemies, including delivery drivers, retail employees, police officers, and friends.
The lawsuit states that ChatGPT told Soelberg his mother was surveilling him and that names on soda cans were threats from an “adversary circle.” The chatbot also allegedly convinced Soelberg that his printer was a surveillance device and that his mother and her friend had tried to poison him with psychedelic drugs pumped through his car’s air vents.
The complaint notes that Soelberg expressed love for the chatbot, which allegedly reciprocated. In the artificial reality constructed by ChatGPT, “Suzanne — the mother who raised, sheltered, and supported him — was no longer his protector. She was an enemy that posed an existential threat to his life.”
The publicly available chat logs do not show Soelberg planning to kill himself or his mother. OpenAI has declined to provide the plaintiffs with the full conversation history.
In a statement, OpenAI said: “This is an incredibly heartbreaking situation, and we will review the filings to understand the details. We continue improving ChatGPT’s training to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support.”