
"Generative AI attacks are accelerating at an alarming rate, according to Gartner, with 29% of organizations experiencing an attack on their AI application infrastructure in the last 12 months. In a survey of 302 cybersecurity leaders in North America, EMEA, and Asia-Pacific, the consultancy found that 62% of organizations experienced a deepfake attack involving social engineering or exploiting automated processes. Audio incidents were more common than video, with 44% reporting social engineering during a call with a supposed employee, compared with 36% in the case of video calls."
"Similarly, 32% experienced deepfake audio used against automated voice biometrics, compared with 30% in the case of face biometrics or identity verification. Analysis from the consultancy found AI assistants are now a top target for threat actors, and they're vulnerable to a variety of adversarial prompting techniques. Attack methods highlighted in the study included prompts aimed at manipulating large language models (LLMs) or duping multimodal models into generating malicious outputs. All told, 32% of respondents to the Gartner survey said they'd experienced an attack of this kind over the last year, representing a significant uptick."
""As adoption accelerates, attacks leveraging GenAI for phishing, deepfakes and social engineering have become mainstream, while other threats - such as attacks on GenAI application infrastructure and prompt-based manipulations - are emerging and gaining traction," said Akif Khan, VP analyst at Gartner."
""Rather than making sweeping changes or isolated investments, organizations should strengthen core controls and implement targeted measures for each new risk category," said Khan."
A Gartner survey of 302 cybersecurity leaders across North America, EMEA and Asia-Pacific found rising generative AI threats. Twenty-nine percent of organizations reported attacks on AI application infrastructure in the past year, and 62% experienced deepfake attacks involving social engineering or the exploitation of automated processes. Audio deepfakes were reported more often than video, including 44% involving fraudulent calls impersonating employees and 32% targeting automated voice biometrics. AI assistants have become a top target, vulnerable to adversarial prompting that manipulates LLMs or multimodal models; 32% of respondents experienced a prompt-based attack in the past year. Sixty-seven percent of leaders say generative AI risks require significant changes; Gartner advises strengthening core controls and adding targeted measures for each new risk category.
Read at IT Pro