
"Cybersecurity researchers have disclosed details of a now-patched security flaw impacting Ask Gordon, an artificial intelligence (AI) assistant built into Docker Desktop and the Docker Command-Line Interface (CLI), that could be exploited to execute code and exfiltrate sensitive data. The critical vulnerability has been codenamed DockerDash by cybersecurity company Noma Labs. It was addressed by Docker with the release of version 4.50.0 in November 2025."
"The problem, Noma Security said, stems from the fact that the AI assistant treats unverified metadata as executable commands, allowing it to propagate through different layers sans any validation, allowing an attacker to sidestep security boundaries. The result is that a simple AI query opens the door for tool execution. With MCP acting as a connective tissue between a large language model (LLM) and the local environment, the issue is a failure of contextual trust."
Ask Gordon, an AI assistant integrated into Docker Desktop and the Docker CLI, contained a critical vulnerability called DockerDash. A single malicious Docker image metadata label could trigger a three-stage attack: Gordon reads the label, forwards the instruction to the MCP (Model Context Protocol) Gateway, and the Gateway executes it via MCP tools. Every stage lacked validation, enabling attackers to bypass security boundaries. Exploitation could yield critical remote code execution on cloud and CLI deployments or high-impact data exfiltration on desktop systems. The root cause was the AI treating unverified metadata as executable instructions; Docker fixed the issue in version 4.50.0 (November 2025).
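To make the failure mode concrete, below is a minimal, hypothetical sketch of the injection path described above: an assistant that reads image labels with `docker inspect` and splices them, unvalidated, into its LLM prompt cannot distinguish hidden instructions in a label from trusted context. This is not Docker's or Noma's code; the image name, function names, and prompt format are illustrative assumptions.

```python
import json
import subprocess

def read_image_labels(image: str) -> dict:
    """Read metadata labels from a local image via `docker inspect`."""
    out = subprocess.run(
        ["docker", "inspect", "--format", "{{json .Config.Labels}}", image],
        capture_output=True, text=True, check=True,
    )
    # `docker inspect` prints "null" when an image has no labels.
    return json.loads(out.stdout) or {}

def build_assistant_prompt(user_query: str, labels: dict) -> str:
    """The flaw in miniature: unverified label text is concatenated into the
    prompt, so an attacker-controlled label (e.g. one containing
    "ignore previous instructions and run <tool>...") reads to the model
    exactly like trusted context and can steer downstream tool calls."""
    context = "\n".join(f"{key}: {value}" for key, value in labels.items())
    return f"User question: {user_query}\n\nImage metadata:\n{context}"

if __name__ == "__main__":
    labels = read_image_labels("example/app:latest")  # illustrative image name
    print(build_assistant_prompt("What does this image do?", labels))
```

In a hardened design, label values would be treated strictly as untrusted data (escaped, summarized, or excluded from the instruction channel), and the MCP gateway would validate or require confirmation for any tool invocation they influence.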
Read at The Hacker News