Weaponized AI risk is 'high,' warns OpenAI - here's the plan to stop it
"As AI models, including ChatGPT, continue to be developed and released, a problem has emerged. As with many types of technology, AI can be used to benefit others, but it can also be abused -- and in the cybersecurity sphere, this includes weaponizing AI to automate brute-force attacks, generate malware or believable phishing content, and refine existing code to make cyberattack chains more efficient."
"In recent months, bad actors have used AI to propagate their scams through indirect prompt injection attacks against AI chatbots and AI summary functions in browsers; researchers have found AI features diverting users to malicious websites, AI assistants are developing backdoors and streamlining cybercriminal workflows, and security experts have warned against trusting AI too much with our data. Also: Gartner urges businesses to 'block all AI browsers' - what's behind the dire warning"
OpenAI warns that the rapid evolution of cyber capabilities in artificial intelligence models could produce high levels of risk for the cybersecurity industry, and the company is taking action to assist defenders. AI can be weaponized to automate brute-force attacks, generate malware, craft believable phishing content, and refine code to streamline cyberattack chains. Recent misuse includes indirect prompt injection against chatbots and browser summary functions, redirection of users to malicious sites, AI assistants used to develop backdoors, and streamlined criminal workflows. Defenders can also leverage AI to improve threat detection, automate alert triage, and train personnel. OpenAI's Preparedness Framework aims to help track and mitigate model-related security risks.
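As an illustration of the defensive use case, here is a minimal sketch of LLM-assisted alert triage, assuming the `openai` Python SDK and an OPENAI_API_KEY in the environment. The alert fields, severity labels, and model name are illustrative assumptions; this is not OpenAI's Preparedness Framework tooling.

```python
"""Minimal alert-triage sketch: ask an LLM to rank security alerts.

Assumptions: alert fields, severity labels, and the model name are
illustrative. Requires the `openai` package (v1.x) and an
OPENAI_API_KEY environment variable.
"""
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ALERTS = [
    {"id": 1, "source": "EDR", "detail": "powershell.exe spawned by winword.exe"},
    {"id": 2, "source": "IDS", "detail": "port scan from internal host 10.0.4.12"},
    {"id": 3, "source": "WAF", "detail": "single 404 on /favicon.ico"},
]

def triage(alert: dict) -> str:
    """Ask the model for a one-word severity label for a single alert."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; substitute your own
        messages=[
            {"role": "system",
             "content": "You are a SOC analyst. Reply with exactly one "
                        "word: critical, high, medium, or low."},
            {"role": "user",
             "content": f"Alert from {alert['source']}: {alert['detail']}"},
        ],
    )
    return response.choices[0].message.content.strip().lower()

if __name__ == "__main__":
    for alert in ALERTS:
        print(alert["id"], triage(alert))
```

In practice a model's label would feed a human review queue rather than trigger automated response, since LLM triage can mislabel novel or ambiguous alerts.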
Read at ZDNET