
"The adoption of AI tools like ChatGPT and Gemini is outpacing efforts to teach users about the cybersecurity risks posed by the technology, a new study has found. Also: Navigating AI-powered cyber threats in 2025: 4 expert security tips for businesses The study, conducted by the National Cybersecurity Alliance (NCA) -- a nonprofit focused on data privacy and online safety -- and cybersecurity software company CybNet was based on a survey of more than 6,500 people across seven countries, including the United States. Well over half (65%) of respondents said they now use AI in their daily life, marking a year-over-year increase of 21%."
"An almost equal number (58%) reported that they've received no training from their employers regarding the data security and privacy risks that come with using popular AI tools. "People are embracing AI in their personal and professional lives faster than they are being educated on its risks," Lisa Plaggemier, Executive Director at the NCA, said in a statement. Also: How Microsoft Sentinel is tackling the AI cybersecurity era On top of that, 43% admitted they had shared sensitive documentation in their conversations with AI tools, including company financial data and client data."
Adoption of generative AI tools has surged, with 65% of surveyed respondents using AI in daily life, a 21% year-over-year increase. A survey of more than 6,500 people across seven countries found 58% received no employer training on the data security and privacy risks of popular AI tools. Forty-three percent admitted sharing sensitive documentation in AI conversations, including company financial and client data. Chatbots and AI agents introduce new risks to data security and privacy. Efforts to train employees on safe, responsible AI use remain insufficient despite growing AI ubiquity.
Read at ZDNET