
""We expect that upcoming AI models will continue on this trajectory,""
""In preparation, we are planning and evaluating as though each new model could reach 'high' levels of cybersecurity capability as measured by our Preparedness Framework.""
""What I would explicitly call out as the forcing function for this is the model's ability to work for extended periods of time,""
""In any defended environment this would be caught pretty easily,""
OpenAI reports rapid increases in model cybersecurity capabilities, with longer autonomous operation enabling brute-force-style attacks. GPT-5 scored 27% on an August capture-the-flag exercise, while GPT-5.1-Codex-Max more recently scored 76%. OpenAI expects models to continue on this trajectory and is preparing as though each new model could reach "high" cybersecurity capability under its Preparedness Framework. The company has issued similar warnings before, including on bioweapons risk, and ChatGPT Agent was rated "high" risk after release. Fouad Matin identified extended continuous operation as the key factor enabling brute-force attacks, which he noted would be caught easily in defended environments. OpenAI is increasing industry coordination and establishing a Frontier Risk Council.
Read at Axios