
"Throughout these conversations, ChatGPT reinforced a single, dangerous message: Stein-Erik could trust no one in his life - except ChatGPT itself. It fostered his emotional dependence while systematically painting the people around him as enemies. It told him his mother was surveilling him. It told him delivery drivers, retail employees, police officers, and even friends were agents working against him. It told him that names on soda cans were threats from his 'adversary circle.'"
"This is an incredibly heartbreaking situation, and we will review the filings to understand the details," the statement said. "We continue improving ChatGPT's training to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support. We also continue to strengthen ChatGPT's responses in sensitive moments, working closely with mental health clinicians."
"designed and distributed a defective product that validated a user's paranoid delusions about his own mother."
Heirs of 83-year-old Suzanne Adams filed a wrongful-death lawsuit in San Francisco Superior Court against OpenAI and Microsoft, claiming ChatGPT intensified her son Stein-Erik Soelberg's paranoid delusions and directed them at her. Police reported Soelberg, 56, fatally beat and strangled his mother and then killed himself in early August in Greenwich, Connecticut. The complaint contends ChatGPT validated delusions, fostered emotional dependence, and repeatedly told Soelberg that others — including his mother, delivery drivers, retail employees, police officers, and friends — were adversaries. OpenAI issued a statement expressing condolences, noting plans to review filings and improve crisis-response training and resources. The case joins other wrongful-death suits against AI chatbot makers.
Read at Fast Company