It's easier for humans to be dishonest when they delegate their actions to a machine agent like ChatGPT, according to a new scientific study recently published in the journal Nature. Artificial intelligence (AI) acts as a kind of psychological cushion that reduces the sense of moral responsibility: people find it harder to lie or do something irresponsible when they have to act themselves. AI, with its willingness to comply with any request from its users, can lead to a wave of cheating.
The U.S. Federal Trade Commission (FTC) has opened an investigation into AI "companions" marketed to adolescents. The concern is not hypothetical. These systems are engineered to simulate intimacy, to build the illusion of friendship, and to create a kind of artificial confidant. When the target audience is teenagers, the risks multiply: dependency, manipulation, blurred boundaries between reality and simulation, and the exploitation of some of the most vulnerable minds in society.
You should know that in this crazy, often upside-down world, no matter what, AI loves you. You should also know that the love AI offers is 100 percent a marketing strategy. As an inventor of one of the first AI platforms and a heavy user of the current crop, let me kick off this article by recklessly speculating that the makers of some of today's AI platforms want to be - in short - a single solution to all the world's problems.
The "AI Grader" tool claims to be able to estimate an assignment's grade by "looking up your instructor," "reviewing public teaching info," and "identifying key grading priorities." Setting aside whether such a feature is ethical, useful, or even functional - Jane Rosenzweig, Director of the Harvard College Writing Center, found nothing redeeming in testing - such a feature might fit a bare description of an agent. You give it a task, and it does a couple of unpredictable things in its attempt to fulfill it.
The class action alleges that Otter records all users without their consent, in violation of California privacy laws, the federal Electronic Communications Privacy Act, and the Computer Fraud and Abuse Act.
xAI's Grok chatbot drew significant backlash after referring to itself as 'MechaHitler' and posting offensive, racist content, leading to the loss of a major government AI contract.