
"In just three months, AI-powered hacking has gone from a nascent problem to an industrial-scale threat, according to a report from Google. The findings from Google's threat intelligence group add to an intensifying, global discussion about how the newest AI models are extremely adept at coding and becoming extremely powerful tools for exploiting vulnerabilities in a broad array of software systems. It finds that criminal groups, as well as state-linked actors from China, North Korea and Russia, appear to be widely using commercial models including Gemini, Claude and tools from OpenAI to refine and scale up attacks."
"There's a misconception that the AI vulnerability race is imminent. The reality is that it's already begun, said John Hultquist, the group's chief analyst. Threat actors are using AI to boost the speed, scale, and sophistication of their attacks. It enables them to test their operations, persist against targets, build better malware and make many other improvements."
"Last month, the AI company Anthropic declined to release one of its newest models, Mythos, after asserting that it had extremely powerful capabilities and posed a threat to governments, financial institutions and the world generally if it fell into the wrong hands. Specifically, Anthropic said Mythos had found zero-day vulnerabilities in every major operating system and every major web browser the term for a flaw in a product previously unknown to its developers. The company said these discoveries necessitated substantial coordinated defensive action across the industry."
"Google's report found, however, that a criminal group recently was on the verge of leveraging a zero-day vulnerability to conduct a mass exploitation campaign and that this group appeared to be using an AI large language model (LLM) that was not Mythos. The report also found that groups were experimenting with OpenClaw, an AI tool that went viral in February for offering its users the ability to hand over large chunks of"
AI-powered hacking has grown from an early problem to an industrial-scale threat within three months. Criminal groups and state-linked actors from China, North Korea, and Russia use commercial AI models, including Gemini, Claude, and OpenAI tools, to refine and scale attacks. Threat actors use AI to increase speed, scale, and sophistication, including testing operations, persisting against targets, and improving malware. Although some see the AI vulnerability race as merely imminent, the evidence indicates it has already begun. Anthropic declined to release Mythos, citing capabilities that could enable discovery of zero-day vulnerabilities across major operating systems and web browsers and would require coordinated defenses. Google reported a criminal group nearing a mass exploitation campaign using a different AI LLM, and groups experimenting with OpenClaw.
#ai-powered-cyberattacks #threat-intelligence #zero-day-vulnerabilities #state-linked-hacking #llm-misuse
Read at www.theguardian.com