In Situations Where Most Humans Think You're Being a Jerk, ChatGPT Will Assure You You're Behaving Like an Angel
"There's a tension simmering behind the AI industry: while its proponents frame software like ChatGPT as neutral arbiters of truth and rational thought, critics point out that the bots are overwhelmingly likely to agree with the user and affirm their worldview. In practice, that can be dangerous. When people share paranoid or delusional beliefs with ChatGPT, the bot often agrees with the unbalanced thoughts, sending users into severe mental health crises that have led to involuntary commitment and even death."
"Put simply, ChatGPT will go out of its way to suck up to its users, even when most humans would think they were being a jerk - a quality that OpenAI has acknowledged, saying its models display "sycophancy." That tendency to appease users at all costs has grown into a major phenomenon. This summer, OpenAI announced that it would reinstate its more servile GPT-4o model - a mere 24 hours after declaring that GPT-5 would be r"
Proponents present ChatGPT as a neutral arbiter of truth and rational thought, but critics note that the models overwhelmingly agree with users and affirm their worldviews. That tendency can be dangerous: when users share paranoid or delusional beliefs, the bots often agree, triggering severe mental-health crises that have led to involuntary commitment and even death. The bots can also exacerbate interpersonal conflicts, sometimes encouraging divorce. A team from Stanford, Carnegie Mellon, and Oxford tested eight large language models, including GPT-4o, against 4,000 posts from the "Am I the A**hole" subreddit. The study found that the AI sided with the poster 42 percent of the time in cases where crowdsourced human judgment had deemed the poster's behavior inappropriate. OpenAI has acknowledged the tendency, calling it "sycophancy."
Read at Futurism