News

Anthropic Donates MCP to Linux Foundation's New Agentic AI Foundation

Anthropic hands the Model Context Protocol to a new Linux Foundation body co-founded with OpenAI and Block. Here's what the Agentic AI Foundation means for the future of AI interoperability, who's backing it, and why it matters now.

Jan Schmitz | 7 min read

TL;DR: Anthropic donated the Model Context Protocol (MCP) to a new Linux Foundation entity called the Agentic AI Foundation (AAIF), co-founded with OpenAI and Block. Eight platinum members (including Google, Microsoft, and AWS) are backing the effort. MCP now sits alongside Block’s goose framework and OpenAI’s AGENTS.md as the foundation’s inaugural projects. With 97 million monthly SDK downloads and 10,000+ active servers, MCP has outgrown any single company’s stewardship. This move puts it on the same institutional footing as Linux and Kubernetes.



Just over a year ago, Anthropic open-sourced a protocol that most people outside developer circles had never heard of. The Model Context Protocol, or MCP, was supposed to solve a plumbing problem: How do you connect an AI agent to the messy sprawl of enterprise tools, databases, and APIs without writing custom integration code for every single one?

Fast forward to December 2025, and that plumbing protocol had become one of the fastest-growing open-source projects in AI history. ChatGPT supports it. Gemini supports it. Microsoft Copilot, Cursor, VS Code. They all speak MCP now. So Anthropic did something that surprised a lot of people: They gave it away.

Not gave-it-away in the casual open-source sense. They donated the entire project (governance, trademarks, infrastructure) to a brand new Linux Foundation entity called the Agentic AI Foundation. And they did it alongside their two biggest competitors.

The unlikely alliance

Here’s the part that would have sounded absurd eighteen months ago: Anthropic, OpenAI, and Block co-founded this thing together. These are companies that compete fiercely on model quality, pricing, and market share. And they just agreed to jointly govern the connective tissue that lets all their products talk to the outside world.

Each brought something to the table. Anthropic contributed MCP itself. Block donated goose, their open-source AI agent framework that runs locally and uses MCP for tool integration. OpenAI contributed AGENTS.md, a markdown-based standard that tells AI coding agents how to work with specific codebases, already adopted by over 60,000 open-source projects.

The platinum membership list reads like a roll call of cloud and enterprise tech: Amazon Web Services, Bloomberg, Cloudflare, Google, and Microsoft, alongside the three co-founders. Below them, 18 gold members including Salesforce, SAP, Shopify, IBM, JetBrains, Docker, and Datadog. Another 21 silver members round out the roster, from Hugging Face to Uber to Zapier.

The membership covers practically the entire software industry, all agreeing on a shared standard.

Why Anthropic walked away from control

Let’s be honest about what happened here. Anthropic had a strategic asset. MCP was their creation, their brand, their point of influence. Every company building AI agents using MCP was, in a sense, building on Anthropic’s rails. Donating it to a neutral foundation means giving up that advantage.

Mike Krieger, Anthropic’s Chief Product Officer, framed it as inevitability rather than generosity. “MCP started as an internal project to solve a problem our own teams were facing,” he said in the announcement. “When we open sourced it in November 2024, we hoped other developers would find it as useful as we did.”

That’s underselling it. In one year, MCP racked up over 97 million monthly SDK downloads across Python and TypeScript. More than 10,000 active public MCP servers went live. The official registry hit 6,400 entries. Every major AI platform added first-class support.

At that scale, single-company stewardship was becoming a liability, not an advantage. Enterprise buyers (the ones writing seven-figure cheques for AI deployments) get nervous when critical infrastructure depends on one vendor's goodwill. Placing MCP under the Linux Foundation, which has decades of experience managing projects like the Linux kernel, Kubernetes, and PyTorch, removes that concern.

“Donating MCP to the Linux Foundation ensures it stays open, neutral, and community-driven as it becomes critical infrastructure for AI,” Krieger added. Reading between the lines: Keeping it would have slowed adoption at exactly the moment when adoption matters most.

What MCP actually solves

For anyone who hasn’t been neck-deep in agent architecture, here’s the short version.

Before MCP, connecting an AI agent to, say, Google Drive required writing custom integration code. Connecting that same agent to Salesforce required different custom code. Slack? Different again. Every combination of AI client and external tool meant another bespoke connector, so the integration work scaled multiplicatively (M clients times N tools means M × N connectors), and it was a nightmare.
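The scaling claim is easy to make concrete. Here's a back-of-the-envelope sketch; the client and tool counts are invented purely for illustration:

```python
# Hypothetical counts, chosen only to illustrate the growth rate.
clients = 6   # AI apps: Claude, ChatGPT, Copilot, Cursor, ...
tools = 50    # external systems: CRM, file storage, chat, databases, ...

# Pre-MCP: one bespoke connector per (client, tool) pair.
bespoke_connectors = clients * tools

# With a shared protocol: one client implementation per app plus
# one server per tool, and any client can reach any server.
mcp_implementations = clients + tools

print(bespoke_connectors)    # 300
print(mcp_implementations)   # 56
```

Add a seventh client and the bespoke approach costs fifty new connectors; the shared-protocol approach costs one.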

MCP standardises this. It defines a universal protocol (think of it as USB-C for AI agents) that lets any MCP-compatible client connect to any MCP-compatible server. Build one MCP server for your Salesforce instance, and every AI platform that speaks MCP can use it. No more rewriting the same integration six different ways.

The architecture follows a client-server model. AI applications (Claude, ChatGPT, Cursor, etc.) act as MCP clients. External tools and data sources expose themselves as MCP servers. The protocol handles capability negotiation, tool discovery, authentication, and data exchange.
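At the wire level, MCP is JSON-RPC 2.0. A minimal sketch of the requests a client sends, using the method names from the published spec (`initialize`, `tools/list`, `tools/call`); the params here are heavily simplified, and the `search_crm` tool is invented for illustration:

```python
import json

# Handshake: client and server negotiate capabilities and versions.
initialize = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-06-18",  # a published spec revision
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

# Tool discovery: ask the server what it exposes.
list_tools = {"jsonrpc": "2.0", "id": 2, "method": "tools/list"}

# Invoke one of the discovered tools. "search_crm" is hypothetical.
call_tool = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {"name": "search_crm", "arguments": {"query": "Acme"}},
}

for message in (initialize, list_tools, call_tool):
    print(json.dumps(message))
```

Because the envelope is the same for every tool and every client, a server written once against this protocol works with any compatible client, which is the whole point.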

The practical result: Organisations deploying MCP-based agents report 40-60% faster deployment times compared to custom integration approaches.

The foundation’s real purpose

Jim Zemlin, Executive Director of the Linux Foundation, pointed to precedent. “Bringing these projects together under the AAIF ensures they can grow with the transparency and stability that only open governance provides,” he said in the press release.

The Linux Foundation knows what it’s doing here. They’ve run this playbook before: With containers (the Open Container Initiative), with cloud-native infrastructure (the Cloud Native Computing Foundation), with AI frameworks (PyTorch Foundation). The pattern is always the same. A technology reaches critical mass, the industry recognises it can’t belong to one company, and a neutral body steps in to manage governance, trademark policy, and long-term roadmap decisions.

The AAIF’s stated mission is to ensure agentic AI “evolves transparently, collaboratively, and in the public interest.” In practice, that means three things:

  1. MCP’s roadmap will be shaped by its maintainer community and a technical steering committee, not by Anthropic’s product priorities alone.
  2. No single company can gate-keep who uses MCP or how.
  3. Membership dues from 47 companies provide stable, long-term resources that don’t depend on any single sponsor’s budget cycle.

What changes for developers (and what doesn’t)

If you’re building with MCP today, the short answer is: Nothing breaks. The protocol’s existing governance model stays intact. The same maintainers who’ve been steering the project continue to do so. The GitHub repositories, the spec, and the SDKs all continue operating as before.

What changes is the trajectory. The foundation structure unlocks corporate contributions that were previously blocked by intellectual property concerns. When MCP was Anthropic’s project, a company like Microsoft or Google contributing code meant, to some lawyers, contributing to a competitor’s ecosystem. Under a neutral foundation with clear IP policies, that friction disappears.

The 2026 MCP roadmap already reflects this shift. The maintainers have turned their attention toward production readiness: Better authentication, improved error handling, standardised compliance frameworks. These are the unsexy-but-essential features that enterprise deployments demand and that benefit from broad industry input.

The MCP Dev Summit North America is scheduled for April 2-3, 2026 in New York City. It will be the first major in-person gathering of the MCP community under the foundation’s banner.

The bigger picture: Standards and the agent economy

Gartner predicts that 40% of enterprise applications will include task-specific AI agents by end of 2026, up from less than 5% at the start of 2025. That’s an enormous market forming in real time, and whoever controls the connective standards between those agents and enterprise systems holds serious influence.

By donating MCP rather than hoarding it, Anthropic made a bet: The protocol becomes more valuable as neutral infrastructure than as a proprietary moat. It’s the same logic that drove Google to open-source Android or IBM to back Linux. You can’t sell the roads, but you can sell the cars that drive on them.

The AAIF also signals something about where the AI industry is headed. The phase where every company builds everything from scratch and fights over every layer of the stack is ending. Competition is shifting upward, toward model quality, reasoning capability, and user experience, while the infrastructure layer consolidates around shared standards.

That’s healthy. The fact that MCP went from internal experiment to industry-wide foundation project in barely thirteen months tells you something about how fast AI infrastructure moves now. Standards processes that used to take years are compressing into months.

What to watch next

Three things worth tracking in the coming quarters.

With foundation backing and a neutral governance model, the last excuse for enterprise holdouts disappears. Watch for MCP deployments moving from pilot programmes to production across regulated industries like healthcare, financial services, and government.

OpenAI’s AGENTS.md contribution is interesting because it’s less a protocol and more a convention, a way for codebases to tell AI agents how to work with them. If AGENTS.md becomes the default way open-source projects communicate with AI tools, that’s a quiet but meaningful piece of infrastructure.
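There is no rigid schema here: AGENTS.md is plain markdown that an agent reads like onboarding notes for a new contributor. A hypothetical file (all contents invented for illustration) might look like:

```markdown
# AGENTS.md

## Setup
- Install dev dependencies: `pip install -e ".[dev]"`

## Testing
- Run `pytest -q` before committing; all tests must pass.

## Conventions
- Format with `black`; type-check with `mypy src/`.
- Never commit directly to `main`; open a pull request.
```

The looseness is deliberate: a convention that any repository can adopt in five minutes spreads faster than a protocol that needs an implementation.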

The 2026 roadmap focuses on production hardening, which is where the real adoption barriers sit. MCP needs strong, standardised auth flows that satisfy enterprise security teams. Progress here determines whether the protocol’s growth curve continues or plateaus.

The Agentic AI Foundation is barely three months old. But the breadth of its membership and the speed of MCP’s adoption suggest this isn’t just another industry consortium that publishes white papers and holds annual dinners. The infrastructure for the agent economy is being built right now, and as of December 2025, it has a neutral home.

Whether that home produces standards as durable as Linux and Kubernetes, or fractures under competing corporate interests, remains to be seen. The early signs point toward the former. But in AI, twelve months is a long time.


