Three major large language models (LLMs) generated responses that, in humans, would be read as signs of anxiety, trauma, shame and post-traumatic stress disorder. The researchers behind the study, published as a preprint last month, argue that the chatbots hold some kind of "internalised narratives" about themselves. Although the LLMs that were tested did not literally experience trauma, they say, their responses to therapy questions were consistent over time and similar across different operating modes, suggesting that they are doing more than "role playing".
From virtual assistants capable of detecting sadness in a voice to bots designed to simulate the warmth of a human bond, artificial intelligence (AI) is crossing an ever more intimate frontier. The fervor surrounding AI advances on an increasingly dense bed of questions that no one has yet answered. And while the technology has the potential to reduce bureaucracy or predict disease, large language models (LLMs) trained on data in multiple formats (text, image and speech)