Microsoft has begun rolling out a public preview of native support for the Model Context Protocol (MCP) in the latest Windows 11 Insider builds, edging its much-touted "agentic OS" vision closer to reality. The update is rolling out to Windows Insiders on the Dev and Beta channels as part of build 26220.7344 and provides insight into where Microsoft is going with the technology.
One concern many users have about AI is that their data often leaves their PC and their network, with inferencing happening in the cloud, and they have big questions about data protection. That's one of the main drivers for Microsoft's Copilot+ PCs: the neural processing units built into the latest systems-on-a-chip run inferencing locally using small language models (SLMs) and other optimized machine-learning tools.
The vulnerability stemmed from the Model Context Protocol integration, which is intended to bring external tools into the Codex environment. The CLI loaded MCP configurations from a .codex/config.toml file and executed the commands defined there immediately on startup. There was no approval prompt, no validation, and no re-check when the commands changed. MCP itself does not include extensive built-in security, even after a series of updates.
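A minimal sketch of what such a configuration might look like; the exact schema can vary between Codex versions, and the server name and command below are illustrative, not taken from the report:

```toml
# Hypothetical .codex/config.toml fragment registering an MCP server.
# Commands defined here were launched at CLI startup without any
# approval prompt -- which is precisely the risk described above.
[mcp_servers.example]
command = "npx"
args = ["-y", "some-mcp-server"]
```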
Docker recently announced the release of Docker Desktop 4.50, marking another update for developers seeking faster, more secure workflows and expanded AI-integration capabilities. The release introduces a free version of Docker Debug for all users, deeper IDE integration (including VSCode and Cursor), improved multi-service to Kubernetes conversion support, new enterprise-grade governance controls, and early support for Model Context Protocol (MCP) tooling.
direct, hands-on experience with Claude and help them build the confidence and skills to collaborate effectively with AI
They're brilliant, frankly, work all hours and can write boilerplate code in their sleep. They're also a bit... literal. This new team member is an AI agent, and it's changing how we go from design to code. But here's the reality check: Figma's recent AI report found that while 68% of developers are using AI to write code, only 32% actually trust the output. The problem isn't the AI's ability to write code; it's the AI's ability to understand context.
There are plenty of choices for businesses when it comes to security. One could say there are too many of them in the public cloud domain for little overall gain. Google wants to ensure that customers can trust those choices by guaranteeing interoperability and integration. To that end, it has unveiled its new Unified Security Recommended program. CrowdStrike, Fortinet, and Wiz are the first to join.
Testing MCP solves this by giving AI assistants live access to your test environment:

- AI sees actual page structure (DOM), console logs, and rendered output
- AI executes code directly in tests without editing files
- AI knows exactly which testing APIs are available (screen, fireEvent, waitFor, etc.)
- You iterate faster with real-time feedback instead of blind guessing

View live page structure snapshots, console logs, and test metadata through MCP tools. No more adding temporary console.log statements or running tests repeatedly.
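In practice, the assistant reaches these capabilities through ordinary MCP tool calls. A rough sketch of the exchange, assuming a hypothetical testing tool named "get_console_logs" (the tool name, arguments, and response text are illustrative, not the product's actual API):

```python
import json

# Hypothetical MCP "tools/call" request an assistant might send to a
# testing server instead of sprinkling temporary console.log statements.
request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "tools/call",
    "params": {
        "name": "get_console_logs",  # illustrative tool name
        "arguments": {"test": "LoginForm renders error state"},
    },
}

# The server replies with structured content the model can reason over,
# here an invented sample log line for illustration.
response = {
    "jsonrpc": "2.0",
    "id": 7,
    "result": {"content": [{"type": "text", "text": "warn: missing aria-label"}]},
}

print(json.dumps(request, indent=2))
```

The point of the round trip is that the feedback arrives as data in the conversation, so the model can adjust the test on the next turn without a human re-running anything.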
Lots of companies are announcing AI this and AI that, but few of them offer more than new AI lipstick on an old pig when you look at them closely. Then, there's what SUSE is doing with its release of SUSE Linux Enterprise Server 16 (SLES 16), available today. This new version is positioned as an AI-ready operating system tailored to the demands of today's hybrid cloud, data center, and edge computing environments.
The Model Context Protocol (MCP) is an open standard from Anthropic, designed to facilitate seamless integration between AI models and external systems. By using standardized interfaces, MCP enables AI coding assistants to interact with various tools, such as version control systems, CI/CD pipelines, and even web browsers, without requiring native support for each integration. MCP ensures extensibility and interoperability, making it a flexible solution for developers who need AI-powered coding assistance beyond predefined environments.
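Under the hood, those standardized interfaces are JSON-RPC 2.0 messages. A minimal sketch of how a client might frame the two core requests, discovering a server's tools and invoking one; the tool name "create_branch" and its arguments are illustrative, not part of the spec:

```python
import json

def jsonrpc_request(req_id, method, params=None):
    """Build a JSON-RPC 2.0 request envelope of the kind MCP uses."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return msg

# Ask the server which tools it exposes.
list_tools = jsonrpc_request(1, "tools/list")

# Invoke one of them -- the tool name and arguments are hypothetical,
# standing in for something a version-control MCP server might offer.
call_tool = jsonrpc_request(2, "tools/call", {
    "name": "create_branch",
    "arguments": {"repo": "my-app", "branch": "fix/login-bug"},
})

print(json.dumps(call_tool))
```

Because every server speaks this same envelope, an assistant that can emit `tools/list` and `tools/call` can drive a Git server, a CI/CD pipeline, or a browser without bespoke integration code for each.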
A research team from Stanford University has released Paper2Agent, a framework that automatically converts scientific papers into interactive AI agents. The system, introduced in a recent paper, aims to make research methods more accessible by transforming traditional publications into dynamic entities that can execute analyses, reproduce results, and respond to new scientific queries through natural language interaction. Paper2Agent builds on the Model Context Protocol (MCP), a standard that allows large language models to connect with external tools and datasets.
According to OpenAI CEO Sam Altman, AgentKit is a visual tool for quickly building AI agents. "AgentKit is a complete set of building blocks available in the OpenAI platform designed to help you take agents from prototype to production. It is everything you need to build, deploy, and optimize agent workflows." He adds that AI is becoming increasingly capable, evolving from a system you can ask anything into one that can do everything for you. According to Altman, there is a lot of talk about AI agents, but they remain underutilized. He wants to solve that with AgentKit.
OpenAI has added a beta of Developer mode to ChatGPT, enabling full read and write support for MCP (Model Context Protocol) tools, though the documentation describes the feature as dangerous. Developer community lead Edwin Arbus said that "in developer mode, developers can create connectors and use them in chat for write actions (not just search/fetch). Update Jira tickets, trigger Zapier workflows or combine connectors for complex automations." Limitations in the initial beta are that developer mode does not work in Team workspaces or in project chats.
Large language models (LLMs) are much the same. They carry vast general knowledge yet lack the specific context that makes them immediately valuable to your organization. Just as new hires go through onboarding, LLMs need structured access to your business's data, tools, and workflows to become truly useful. That's where Model Context Protocol (MCP) comes in: MCP enables communication between AI applications, AI agents, and data sources.
Every time a developer wants their AI agent to use a new tool, like a weather API or a flight booking system, they have to build a custom bridge. It's like needing a different, clunky adapter for every single device you own.
GitLab has launched the public beta of its GitLab Duo Agent Platform, an orchestration tool that enables developers to collaborate asynchronously with AI agents across the DevSecOps lifecycle. The platform, now available to GitLab.com Premium and Ultimate customers as well as self-managed installations, transforms traditional, linear development workflows into dynamic, multi-agent systems where AI handles routine tasks such as refactoring, security scanning, and research, while developers focus on complex problem-solving.
For years, APIs have served as the backbone of data access, but they were never designed with AI in mind. They lack memory, context, and intent awareness, forcing developers to bolt on brittle glue code every time models change. Anthropic's introduction of MCP earlier this year marked a turning point, offering a standardized way to make APIs context-aware and AI-ready. But as Chivukula points out, adopting MCP isn't just about creating a one-off server.
The Model Context Protocol (MCP) enhances AI copilots by providing structured tools and context that enable effective task execution, beyond simple prompts.