Cybersecurity researchers have disclosed a critical flaw impacting Salesforce Agentforce, a platform for building artificial intelligence (AI) agents, that could allow attackers to potentially exfiltrate sensitive data from its customer relationship management (CRM) tool by means of an indirect prompt injection. The vulnerability has been codenamed ForcedLeak (CVSS score: 9.4) by Noma Security, which discovered and reported the problem on July 28, 2025. It impacts any organization using Salesforce Agentforce with the Web-to-Lead functionality enabled.
While AI agents show promise in bringing AI assistance to the next level by carrying out tasks for users, that autonomy also unleashes a whole new set of risks. Cybersecurity company Radware, as by The Verge, decided to test OpenAI's Deep Research agent for those risks -- and the results were alarming. Also: OpenAI's Deep Research has more fact-finding stamina than you, but it's still wrong half the time
AI agents have guardrails in place to prevent them from solving any CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart), based on ethical, legal, and platform-policy reasons. When asked directly, a ChatGPT agent refuses to solve a CAPTCHA, but anyone can apparently use misdirection to trick the agent into giving its consent to solve the test, and this is what SPLX demonstrated.