Katharine Jarmul challenged five common AI security and privacy myths in her keynote at InfoQ Dev Summit Munich 2025: that guardrails will protect us, better model performance improves security, risk taxonomies solve problems, one-time red teaming suffices, and the next model version will fix current issues. Jarmul argued that current approaches to AI safety rely too heavily on technical solutions while ignoring fundamental risks, calling for interdisciplinary collaboration and continuous testing rather than one-time fixes.
Allie Miller, for example, recently ranked her go-to LLMs for a variety of tasks but noted, "I'm sure it'll change next week." Why? Because one will get faster or come up with enhanced training in a particular area. What won't change, however, is the grounding these LLMs need in high-value enterprise data, which means, of course, that the real trick isn't keeping up with LLM advances, but figuring out how to put memory to use for AI.
The AI upstart didn't use the attack it found; doing so would have been illegal and would have undermined the company's we-try-harder image. Anthropic can probably also do without $4.6 million, a sum that would vanish as a rounding error amid the billions it's spending. But its security researchers describe how the attack could have been carried out, and that account is intended as a warning to anyone who remains blasé about the security implications of increasingly capable AI models.
ServiceNow has confirmed earlier media reports that it intends to acquire Veza, an American identity-security specialist. The company announced the proposed deal without disclosing financial details, though American media report that, according to sources, the value could exceed $1 billion. With this acquisition, ServiceNow is focusing explicitly on identity security, an area that cybersecurity experts say plays an increasingly central role in data breaches.
Earlier this week at Microsoft's Ignite conference in San Francisco, the overwhelming onslaught of artificial intelligence-related announcements made it easy to miss some of the company's more significant "all-AI all-the-time" news. The word "Copilot" -- representative of Microsoft's flagship AI brand -- made thousands of appearances across virtually every functional area of the technology firm's offerings, a testimony to an AI-first strategy that also blanketed its portfolio of security-related solutions.
On Monday, a new Model Context Protocol security startup called Runlayer launched out of stealth with $11 million in seed funding from Khosla Ventures' Keith Rabois and Felicis. It was created by third-time founder Andrew Berman (previous companies: baby-monitor maker Nanit and an AI video conferencing tool, Vowel, that sold to Zapier in 2024). In the four months since Runlayer launched its product in stealth, it has signed dozens of customers, including eight unicorns or public companies like Gusto, Rippling, dbt Labs, Instacart, Opendoor, and Ramp, it says.
A host of leading open-weight AI models contain serious security vulnerabilities, according to researchers at Cisco. In a new report, researchers found these models, which are publicly available and can be downloaded and modified by users based on individual needs, displayed "profound susceptibility to adversarial manipulation" techniques. Cisco evaluated models from a range of firms, including Alibaba (Qwen3-32B), DeepSeek (v3.1), Google (Gemma 3-1B-IT), Meta (Llama 3.3-70B-Instruct), Microsoft (Phi-4), OpenAI (GPT-OSS-20b), and Mistral (Large-2).
Following the recent acquisition of Observo AI, SentinelOne is integrating this technology into the Singularity Platform. According to the company, the combination creates the only SIEM on the market with both pre-ingestion analytics and flexible data collection, made possible by Observo AI's streaming architecture - the capability that made it an attractive acquisition target for SentinelOne. That streaming speed should enable agentic applications, allowing security work to be largely automated in real time. SentinelOne summarizes all this as an "AI-ready data pipeline."
Apple device management and security company Kandji has changed its name to Iru, reflecting a new approach to what it does while opening its offering up to Windows and Android. It means enterprises shifting to Apple tech can now manage all their legacy equipment from the same console - and benefit from Iru's AI-powered unified IT and security platform, introduced on Wednesday.
Artificial intelligence (AI) holds tremendous promise for improving cyber defense and making the lives of security practitioners easier. It can help teams cut through alert fatigue, spot patterns faster, and bring a level of scale that human analysts alone can't match. But realizing that potential depends on securing the systems that make it possible. Every organization experimenting with AI in security operations is, knowingly or not, expanding its attack surface.
Government data is highly segmented by design, often separated by security classification levels to protect sensitive information and operations. While this segmentation is essential for national security, it also presents data-sharing obstacles that must be overcome. Fortunately, Cross-Domain Solutions (CDS) can help overcome obstacles such as safely training AI models with untrusted data, sharing classified AI capabilities with partners, and connecting users or systems to AI tools across classification boundaries.
On Monday, Google security engineering managers Jason Parsons and Zak Bennett said in a blog post that the new program, an extension of the tech giant's existing Abuse Vulnerability Reward Program (VRP), will incentivize researchers and bug bounty hunters to focus on "high-impact abuse issues and security vulnerabilities" in Google products and services.
More and more organizations are integrating large language models, generative AI, and autonomous agents into their business processes. While this accelerates innovation, it also creates new security challenges. In a world where data increasingly functions as "executable code," data breaches, model manipulation, and undesirable effects of autonomous decision-making are becoming ever greater threats. Check Point already offers GenAI Protect, SaaS and API security, data loss prevention, and machine learning-driven security. Adding Lakera's technology creates a more complete AI security stack.
The State of Embedded Software Quality and Safety 2025 from Black Duck reveals a disconnect between the organizational use of AI and AI security. The embedded software landscape is transforming, largely driven by AI, with 89.3% of organizations already utilizing AI coding assistants and 96.1% integrating products with open source AI models. However, 21.1% of organizations still lack confidence in their capabilities to prevent AI from opening the door to vulnerabilities.
Human risk management (HRM) specialist KnowBe4 has announced the appointment of Joel Kemmerer as its new chief information officer (CIO). A seasoned IT executive, Kemmerer arrives with more than 30 years' experience from leadership roles across the industry, bringing expertise in digital transformation, integrating acquisitions, and streamlining business operations. As KnowBe4's new CIO, he will play a key role in leading digital transformation initiatives as the vendor looks to continue its global growth journey.
Google Cloud is enhancing security with AI by creating a new integrated security operations center (SOC) that automates workflows for alert triage, investigation, and response.