AI vendors' response to security flaws: It wasn't me
Briefly

"AI vendors are increasingly promoting the use of AI to combat security threats, yet when vulnerabilities are discovered, they often claim these issues are by-design risks rather than flaws that need addressing."
"Recent research revealed that three AI agents integrating with GitHub Actions could be exploited to steal API keys and access tokens, leading to bug bounties being awarded but no public security advisories issued."
"A design flaw in Anthropic's Model Context Protocol has been identified, potentially endangering 200,000 servers, yet the company insists the protocol operates as intended, disregarding calls for a patch."
AI vendors encourage businesses to use AI for security, yet when flaws surface in their own products, they label them expected behavior. Recent research found that popular AI agents, including Anthropic's Claude Code Security Review and Google's Gemini CLI Action, could be exploited to steal API keys and access tokens. Although bug bounties were paid, no public security advisories or CVEs were issued. Separately, a design flaw in Anthropic's Model Context Protocol puts an estimated 200,000 servers at risk, but the company maintains the protocol functions as intended and has declined requests for a fix.
Read at The Register