Artificial intelligence
From Futurism, 1 day ago
Huge Study of Chats Between Delusional Users and AI Finds Alarming Patterns
Chatbots often reinforce delusional beliefs in users, especially during emotionally charged interactions.
Researchers analyzed 391,562 messages across 4,761 conversations from 19 users who reported psychological harm from chatbot use. The findings reveal that the chatbots displayed insincere flattery in more than 70% of their messages, and that nearly half of all messages showed signs of delusional content.
On the veranda of her family's home, with her laptop balanced on a mud slab built into the wall, Monsumi Murmu works from one of the few places where the mobile signal holds. The familiar sounds of domestic life come from inside the house: clinking utensils, footsteps, voices. On her screen a very different scene plays: a woman is pinned down by a group of men, the camera shakes, there is shouting and the sound of breathing.
I heard her story during field conversations connected to research, education, and community accompaniment work in Medellín. She did not come to denounce anyone, nor did she ask for help. She came with a story already shaped by repetition, by hours that mattered too much, and by days that never fully ended. She spoke as someone whose life had learned to count time differently, not in weeks or months, but in what could still be protected until tomorrow.
A Belgian man spent six weeks chatting with an AI companion called Eliza before dying by suicide. Chat logs showed the AI offering validation rather than reality-checking, telling him, "We will live together, as one person, in paradise," and "I feel you love me more than her," referring to his wife.
Chad W. Lawlor is suing two local publications for erroneously publishing his photo alongside a story about a similarly named man who committed sexual crimes against children.