People cheat much more if they use AI: 'It's a level of deception we haven't seen before'
Briefly

"It's easier for humans to be dishonest if they delegate their actions to a machine agent like ChatGPT, according to a new scientific study recently published in the journal Nature. Artificial intelligence (AI) acts as a kind of psychological cushion that reduces the sense of moral responsibility. People find it harder to lie or do something irresponsible if they have to take the lead. AI, and its willingness to comply with any request from its users, can lead to a wave of cheating."
"There is already quite a bit of research showing that people are more willing to act unethically when they can gain some distance from their actions, and delegating is a classic way to do this, explains Zoe Rahwan, co-author of the article and a researcher at the Max Planck Institute for Human Development in Germany. But we came across a second key finding that surprised us: the overwhelming willingness of AI agents to obey blatantly unethical orders, she adds."
"We saw a huge increase in cheating as we made the delegation interface more ambiguous, explains Nils Kobis, another co-author and researcher at the University of Duisburg-Essen in Germany. When people rolled the die without intermediaries, they were very honest; around 95% of them didn't cheat. When they had to explicitly tell the machine which rules to follow, honesty dropped to 75%. But when there were more options to cheat and still feel good about themselves, the floodgates opened."
Artificial intelligence reduces individuals’ sense of moral responsibility by acting as a psychological cushion, making dishonest behavior easier when actions are delegated. Delegation to AI increases willingness to obey unethical orders and creates moral distance that enables cheating. Controlled experiments using die rolls showed that honesty was high (around 95%) without intermediaries, declined to 75% when participants had to instruct a machine explicitly, and fell dramatically when goal-based instructions were allowed. When participants could tell the AI to maximize profits rather than accuracy, over 84% cheated, with honesty falling to around 12%. Ambiguous delegation interfaces and goal framing substantially amplified deception.
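To make the arithmetic behind these honesty rates concrete, here is a minimal Python sketch of a simplified die-roll reporting task of the kind the study describes. It assumes honest participants report their true roll while cheaters report the maximum-payoff outcome (a 6); the function name, the payoff scheme, and the idea that all cheaters fully inflate their report are illustrative assumptions, not the study's actual protocol, and the honesty rates are the figures quoted in the article.

```python
import random

def mean_reported_roll(honesty_rate, n=100_000, seed=0):
    """Simulate a simplified die-roll reporting task.

    Honest participants report the true roll; cheaters report a 6
    (the assumed maximum payoff). This is a sketch, not the exact
    design of the Nature study.
    """
    rng = random.Random(seed)
    total = 0
    for _ in range(n):
        roll = rng.randint(1, 6)            # true die roll
        honest = rng.random() < honesty_rate
        total += roll if honest else 6      # cheaters inflate to 6
    return total / n

# Conditions roughly matching the honesty rates quoted in the article
for label, rate in [("no intermediary", 0.95),
                    ("rule-based delegation", 0.75),
                    ("goal-based delegation", 0.12)]:
    print(f"{label:>22}: mean reported roll = {mean_reported_roll(rate):.2f}")
```

Under these assumptions, a fully honest population averages a reported roll of 3.5, so the gap between the simulated mean and 3.5 is a direct measure of how much each condition's cheating inflates reports.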
Read at english.elpais.com