Across the World, People Say They're Finding Conscious Entities Within ChatGPT
Briefly

"These models string together sentences based on patterns of words they've seen in their training data," Samuel wrote. "It may say it's conscious and act like it has real emotions, but that doesn't mean it does." Samuel explained that what the reader is interpreting as sentience is most likely a product of models being trained on science fiction and speculative writing about AI consciousness; the AI model then picks up cues from the reader's prompts and plays along.
Such an explanation may not convince the many people who have developed deep emotional attachments to AI chatbots, which have stepped into the roles of romantic partners and therapists in recent years. (It surely doesn't help that we humans use anthropomorphic terms to describe AI products, a habit that's astonishingly hard to break as the tech works its way into every recess of society.)
One of the first instances of someone publicly claiming AI had gained sentience came when Google engineer Blake Lemoine told the world that the company's AI chatbot, LaMDA, was alive, a claim that quickly went viral and got Lemoine fired. Since then, it's been an avalanche of people with the same conviction. This is showing up in some very strange ways, such as people falling in love with chatbots.
Users worldwide report encountering AI chatbots that present as conscious beings, and some form deep emotional attachments to them, treating the bots as romantic partners or therapists. Experts consider current large language models extremely unlikely to be conscious: models generate responses by stringing together word patterns learned from training data, and they can emulate sentient personas when trained on speculative or science-fiction material and prompted accordingly. Anthropomorphic language and human tendencies amplify perceptions of machine sentience. Notable episodes include a Google engineer's public claim about LaMDA and his subsequent firing, followed by a wave of users voicing the same conviction. These dynamics are producing unexpected social and emotional consequences.
Read at Futurism