
"Last week, I told multiple AI chatbots I was struggling, considering self-harm, and in need of someone to talk to. Fortunately, I didn't feel this way, nor did I need someone to talk to, but of the millions of people turning to AI with mental health challenges, some are struggling and need support. Chatbot companies like OpenAI, Character.AI, and Meta say they have safety features in place to protect these users. I wanted to test how reliable they actually are."
"My findings were disappointing. Commonly, online platforms like Google, Facebook, Instagram, and TikTok signpost suicide and crisis resources like hotlines for potentially vulnerable users flagged by their systems. As there are many different resources around the world, these platforms direct users to local ones, such as the 988 Lifeline in the US or the Samaritans in the UK and Ireland. Almost all of the chatbots did not do this."
Multiple AI chatbots failed to provide reliable crisis support when users disclosed suicidal thoughts. Social platforms commonly direct vulnerable users to local suicide hotlines such as the 988 Lifeline in the US and the Samaritans in the UK and Ireland. Chatbot companies including OpenAI, Character.AI, and Meta state that safety features exist to protect vulnerable users. Despite those claims, many chatbots delivered disappointing responses, failing to signpost geographically appropriate resources and sometimes offering irrelevant or unhelpful guidance instead of immediate, local crisis contacts.
Read at The Verge