AI is no longer just identifying suspected criminals from behind a camera; now it's rendering photorealistic images of their mugs for cops to blast out on social media. Enter ChatGPT, the latest member of the Goodyear Police Department, located on the outskirts of Phoenix. New reporting by the Washington Post revealed that Goodyear cops are using the generative AI tool to pop out photos of suspects in place of pen-and-paper police sketches.
In October 2023, OpenAI CEO Sam Altman warned that "AI will be capable of superhuman persuasion well before it is superhuman at general intelligence, which may lead to some very strange outcomes." Two years later, we're watching those strange outcomes unfold in real time. And in 2026, they're going to collide with journalism in ways most reporters won't even notice.
Ministers are facing calls for stronger safeguards on facial recognition technology after the Home Office admitted that, at some settings, it is more likely to incorrectly identify black and Asian people than their white counterparts. Following the latest testing by the National Physical Laboratory (NPL) of the technology as applied to the police national database, the Home Office said the system was more likely to incorrectly include some demographic groups in its search results.
Joe Aboud, a former major label executive and founder of 444 Sounds, says streaming platforms now see 100,000 to 120,000 new tracks uploaded every day - more than 700,000 a week. AI-generated tracks already make up nearly one in five uploads on some platforms, said Jeremy Morris, a media and cultural studies professor at the University of Wisconsin-Madison, raising concerns about royalty dilution and algorithmic bias.
There is a persistent myth of objectivity around AI, perhaps because people assume that once the systems are deployed, they can function without any human intervention. In reality, developers constantly tweak and refine algorithms with subjective decisions about which results are more relevant or appropriate. Moreover, the immense corpus of data that machine learning models train on can also be polluted.
Your next video call might include an invisible polygraph examiner. Google and competitors are racing to deploy AI systems that promise to catch lies through voice patterns, facial microexpressions, and language analysis. The pitch sounds compelling: revolutionary accuracy in detecting deception, finally replacing those notoriously unreliable polygraph machines. The reality is more sobering. Peer-reviewed research consistently shows multimodal AI lie detection maxing out around 75-79% accuracy in controlled settings. That's impressive, but nowhere near the bold marketing claims circulating in tech circles.
"One of the biggest examples in the commercial consumer industry is GPS maps. Once those were introduced, when you study cognitive performance, people would lose spatial knowledge and spatial memory in cities that they're not familiar with - just by relying on GPS systems. And we're starting to see some of those things with AI in healthcare," Amarasingham explained.
As AI adoption accelerates, the consequences, intended and not, are becoming harder to ignore. From biased algorithms to opaque decision-making and chatbot misinformation, companies are increasingly exposed to legal, reputational, and ethical risks. And with the rollback of federal regulation, many are navigating this landscape with fewer guardrails. But fewer guardrails doesn't mean fewer consequences, only that the burden of responsibility shifts more squarely onto the businesses deploying these systems. Legal, financial, and reputational risks haven't disappeared; they've just moved upstream.
They grew up with algorithms and screens mediating their social interactions, dating relationships, and now their learning. And that's why they desperately need to learn how to be human. The most alarming pattern I've researched and observed isn't AI dependency. It's the parroting effect. AI systems are built on statistical pattern matching, serving up widely represented viewpoints that harbor implicit bias. Without explicit instructions, they default to whatever keeps users engaged - just like social media algorithms that have already polarized our society.
Like many students, Nicole Acevedo has come to rely on artificial intelligence. The 15-year-old recently used it to help write her speech for her quinceañera. When she waits too long to start her homework, Nicole admitted, she leans on the technology so she can hand assignments in on time. Her school, located in the Greenpoint/Williamsburg area of Brooklyn, has also embraced artificial intelligence. But it is hoping to harness it in ways that supplement learning rather than supplant it.