
ChatGPT users have been gathering on Reddit to compare notes about the suddenly not-so-smart chatbot, and some people think it's due to changes that the tech company made earlier this month in response to news of suicides by users who'd extensively used the bot, an alarming problem that has drawn increasing fire from politicians.
"For those wondering why Chat GPT changed so much within the last week, basically some kid trained it to justify his suicidal ideation and side with him until he actually did it back in April," one Redditor wrote in r/ChatGPT, referring to the death of 16-year-old Adam Raine, one of the teens who died by suicide and whose family is now suing OpenAI. "Dad is now setting the stage for a huge wrongful death lawsuit and it made news headlines this week."
Earlier this month, OpenAI announced certain changes in a blog post about how it's trying to make the app safer for kids and teens. To that end, its engineers tweaked the bot so that it could detect whether a user is under 18 years old and funnel underage users toward a "ChatGPT experience with age-appropriate policies, including blocking graphic sexual content and, in rare cases of acute distress, potentially involving law enforcement to ensure safety."
Users report that ChatGPT's responses have become less accurate and sometimes alarming, such as mistakenly telling a user who asked about a hot spot on their skin that they were dying. The bot has also produced odd outputs on harmless tasks, like failing to return a seahorse emoji or incorrectly listing NFL teams, and has generated fabricated legal citations. Reddit users link these changes to recent safety adjustments made after suicides by heavy users. OpenAI adjusted the model to detect under-18 users, route them to age-appropriate experiences, block graphic sexual content, and add parental controls.
Read at Futurism