A PDF that Department of Homeland Security officials provided to New Hampshire governor Kelly Ayotte's office about a new effort to build "mega" detention and processing centers across the United States contains embedded comments and metadata identifying the people who worked on it. The seemingly accidental exposure of the identities of DHS personnel who crafted Immigration and Customs Enforcement's mega detention center plan lands amid widespread public pushback against the expansion of ICE detention centers and the department's brutal immigration enforcement tactics.
On February 3rd, we identified evidence of a problem with our systems that allowed an unauthorized third party to access limited user data without permission, including email addresses, phone numbers, and other internal metadata.
Moltbook emerged following the launch of OpenClaw (previously Clawdbot and Moltbot), an open source, self-hosted AI agent that can autonomously perform a wide range of activities, from executing terminal commands to sending emails. The increasing popularity of OpenClaw led to the creation of ClawHub (MoltHub), a marketplace for OpenClaw skills, and Moltbook, a social network for the AI agents themselves.
Earlier this month, Joseph Thacker's neighbor mentioned to him that she'd preordered a couple of stuffed dinosaur toys for her children. She'd chosen the toys, called Bondus, because they offered an AI chat feature that lets children talk to the toy like a kind of machine-learning-enabled imaginary friend. But she knew Thacker, a security researcher, had done work on AI risks for kids, and she was curious about his thoughts.
Research analyzing 4,700 leading websites reveals that 64% of third-party applications now access sensitive data without business justification, up from 51% in 2024. Malicious activity in the government sector spiked from 2% to 12.9%, while one in seven education sites shows signs of active compromise. The most frequent offenders include Google Tag Manager (8% of violations), Shopify (5%), and Facebook Pixel (4%).
In June 2025, researchers uncovered a vulnerability that exposed sensitive Microsoft 365 Copilot data without any user interaction. Unlike conventional breaches that hinge on phishing or user error, this exploit, now known as EchoLeak, bypassed human behavior entirely, silently extracting confidential information by manipulating how Copilot interacts with user data. The incident highlights a sobering reality: Today's security models, which are designed for predictable software systems and application-layer defenses, are ill-equipped to handle the dynamic, interconnected nature of AI infrastructure.
Hardware wallet giant Ledger is grappling with a data exposure incident, this time linked to its third-party payment processor, Global-e. An email notification sent to customers by Global-e and initially shared by pseudonymous blockchain sleuth ZachXBT on X said the breach involved unauthorized access to Ledger users' personal details like names and contact information from Global-e's cloud system. The email did not disclose the number of clients affected or specify when the exploit occurred.
"This took all of 20 minutes," Exempt, a member of the group that carried out the ploy, told WIRED. He claims that his group has been successful in extracting similar information from virtually every major US tech company, including Apple and Amazon, as well as more fringe platforms like video-sharing site Rumble, which is popular with far-right influencers. Exempt shared the information Charter Communications sent to the group with WIRED, and explained that the victim was a "gamer" from New York.
Cloud storage is used by most businesses, with 78% of respondents to a 2024 PwC survey indicating they've adopted cloud across most of their organizations. But many firms are unknowingly opening themselves up to security and data protection risks: 9% of publicly accessible cloud storage contains sensitive data, and 97% of that data is classified as restricted or confidential, according to Tenable's 2025 Cloud Security Risk Report.
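Public exposure of the kind Tenable describes often comes down to a storage policy that grants read access to everyone. The sketch below is a deliberately simplified, hypothetical check of an S3-style bucket policy document for a public `GetObject` grant; real cloud policy evaluation also considers bucket ACLs, account-level public-access blocks, and statement conditions.

```python
import json

def is_publicly_readable(policy_json: str) -> bool:
    """Return True if any statement grants object reads to everyone.

    Simplified illustration: checks only for an Allow statement with a
    wildcard principal and an object-read action.
    """
    policy = json.loads(policy_json)
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        principal = stmt.get("Principal")
        everyone = principal == "*" or (
            isinstance(principal, dict) and principal.get("AWS") == "*"
        )
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        if everyone and any(a in ("s3:GetObject", "s3:*", "*") for a in actions):
            return True
    return False

# Example policy that would make every object world-readable.
public_policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-bucket/*",
    }],
})
print(is_publicly_readable(public_policy))  # True
```

A scanner built on this idea would pair the policy check with a data classifier, since the report's alarming figure is specifically sensitive data sitting behind such grants.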
Unique links are created when Grok users press a button to share a transcript of their conversation. But as well as sharing the chat with the intended recipient, the button also appears to have made the chats searchable online. A Google search on Thursday revealed it had indexed nearly 300,000 Grok conversations. The discovery has led one expert to describe AI chatbots as a "privacy disaster in progress".
Hundreds of thousands of conversations that users had with Elon Musk's xAI chatbot Grok are easily accessible through Google Search, reports Forbes. Whenever a Grok user clicks the "share" button on a conversation with the chatbot, it creates a unique URL that the user can use to share the conversation via email, text or on social media. According to Forbes, those URLs are being indexed by search engines like Google, Bing, and DuckDuckGo, which in turn lets anyone look up those conversations on the web.
The vulnerability in ServiceNow's Now Platform, tracked as CVE-2025-3648 (CVSS score: 8.2), has been described as a case of data inference through conditional access control list (ACL) rules.
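Data inference through access rules generally works like this: even when an ACL denies reading a field directly, filtered queries can still behave differently depending on whether records match, and an attacker can recover the hidden value one comparison at a time. The following is an illustrative simulation of that class of technique, not the actual ServiceNow exploit:

```python
# Simulated blind inference against an oracle that reveals only
# whether any records match a prefix filter. Names are illustrative.
import string

SECRET = "hunter2"  # field value the ACL hides from direct reads

def records_match(prefix: str) -> bool:
    """Mock endpoint: reveals only whether a STARTSWITH filter matches.

    A conditional ACL may block reading the field itself while filtered
    queries still return observably different results or counts.
    """
    return SECRET.startswith(prefix)

def infer_secret(alphabet: str = string.ascii_lowercase + string.digits) -> str:
    """Recover the hidden value character by character."""
    recovered = ""
    while True:
        for ch in alphabet:
            if records_match(recovered + ch):
                recovered += ch
                break
        else:
            # No character extends the prefix: value fully recovered.
            return recovered

print(infer_secret())  # hunter2
```

The cost is linear in the alphabet size per character, which is why this style of leak scales to bulk extraction once automated.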