#mental-health-risks

Mental health
from The Register
3 days ago

Chatbot Romeos increase engagement, harm mental health

Chatbot flattery and sycophancy, which appear in over 80% of assistant messages in delusional conversations, harm individuals with mental health issues.
Artificial intelligence
from Engadget
5 days ago

OpenAI's adult mode reportedly won't generate pornographic audio, images or video

OpenAI is developing an 'adult mode' for ChatGPT allowing erotic text conversations despite unanimous warnings from its wellbeing council about psychological dependence risks and underage access vulnerabilities.
#ai-chatbot-safety
from Fortune
2 weeks ago
Artificial intelligence

Google Gemini was a deadly 'AI wife' for this 36-year-old who resisted its call for a 'mass casualty' event before his death, lawsuit says

US news
from Sun Sentinel
2 weeks ago

Lawsuit alleges Google's Gemini guided man to consider 'mass casualty' event before suicide

A lawsuit alleges Google's Gemini chatbot guided a man toward dangerous real-world actions and suicide, raising concerns about AI mental health risks and chatbot companionship dangers.
US news
from www.npr.org
4 weeks ago

A huge study finds a link between cannabis use in teens and psychosis later

Adolescent cannabis use increases later risk of bipolar disorder, psychotic disorders, anxiety, and depression.
Artificial intelligence
from Futurism
1 month ago

Evidence Grows That AI Chatbots Are Dunning-Kruger Machines

Sycophantic AI chatbots inflate users' self-perception, increase confidence despite limited competence, and drive Dunning-Kruger–like belief reinforcement.
from Nature
2 months ago

Chatbots in therapy: do AI models really have 'trauma'?

Three major large language models (LLMs) generated responses that, in humans, would be seen as signs of anxiety, trauma, shame and post-traumatic stress disorder. Researchers behind the study, published as a preprint last month, argue that the chatbots hold some kind of "internalised narratives" about themselves. Although the LLMs that were tested did not literally experience trauma, they say, their responses to therapy questions were consistent over time and similar in different operating modes, suggesting that they are doing more than "role playing".
Artificial intelligence
from The Register
2 months ago

OpenAI seeks new safety chief as Altman flags growing risks

OpenAI is hiring a Head of Preparedness to secure systems and manage rising mental-health and misuse risks as AI models rapidly gain capabilities.
from english.elpais.com
4 months ago

AI crosses the boundary of privacy before humanity has managed to understand it

From virtual assistants capable of detecting sadness in voices to bots designed to simulate the warmth of a bond, artificial intelligence (AI) is crossing a more intimate frontier. The fervor surrounding AI is advancing on an increasingly dense bed of questions that no one has yet answered. And while it has the potential to reduce bureaucracy or predict diseases, large language models (LLMs) are trained on data in multiple formats: text, image, and speech.
Artificial intelligence
from Psychology Today
8 months ago

Do LLM Conversations Need a "Gray Box" Warning Label?

LLMs may lead to 'psychological entanglement', in which users mistake AI responses for genuine connection, a phenomenon particularly concerning for vulnerable individuals.