Anthropic MCP vs OpenAI Plugins vs Gemini Extensions
The Battle for AI Tool Integration — and Its Winner
In November 2024, Anthropic shipped the Model Context Protocol with quiet confidence. A year later, OpenAI killed their Assistants API and adopted MCP. Google confirmed MCP support in Gemini. Microsoft integrated it into Copilot. In December 2025, Anthropic donated the protocol to the Linux Foundation's newly formed Agentic AI Foundation.
MCP won.
But what exactly won, why did it win, and what does that mean for developers building AI integrations in 2026? The answer matters for every team deciding how to connect their AI applications to external tools, data, and services.
TL;DR
MCP is the universal standard for AI tool integration in 2026, with 10,000+ published servers, 97M+ monthly SDK downloads, and adoption across every major AI platform. OpenAI's proprietary plugin system is deprecated — OpenAI now supports remote MCP servers natively. Gemini supports MCP through Google's AAIF participation. If you're building tool integrations for AI agents, build for MCP.
Key Takeaways
- MCP has 10,000+ published servers and 97 million monthly SDK downloads across Python and TypeScript — it has become de facto infrastructure.
- OpenAI deprecated their Assistants API and adopted MCP. Their App Directory now accepts MCP-based submissions. Remote MCP servers are natively supported.
- Gemini added MCP support following Google's participation in the Linux Foundation's Agentic AI Foundation (AAIF) — the same connectors that work with Claude now work with Gemini.
- MCP was donated to the Linux Foundation in December 2025, anchoring it as neutral, community-governed infrastructure alongside OpenAI's AGENTS.md and Block's goose.
- The January 2026 MCP UI Framework allows MCP servers to serve interactive graphical interfaces directly in chat windows — not just data, but actual mini-applications.
- Function calling is not dead — it's now complementary to MCP, not competing with it. Function calling is how models invoke tools; MCP is how tools are discovered, connected, and executed at scale.
The Problem MCP Solved
Before MCP, AI tool integration was an N×M problem. Every tool needed a custom integration for every AI platform. A GitHub tool needed separate implementations for Claude, GPT, Gemini, and Copilot. A Jira connector meant four different codebases. Enterprise teams with 20 internal tools connected to 4 AI platforms needed 80 separate integrations to maintain.
MCP collapses this to N+M. Build your tool as an MCP server once. Every MCP-compatible client — Claude, GPT, Gemini, VS Code Copilot, Cursor, or your custom agent — connects to it with the same protocol.
Before MCP: The N×M Problem
Tool A → Claude integration
Tool A → OpenAI integration
Tool A → Gemini integration
Tool B → Claude integration
Tool B → OpenAI integration
Tool B → Gemini integration
...80 integrations for 20 tools × 4 platforms
After MCP: N+M
Tool A → MCP Server → [Claude, GPT, Gemini, Cursor, Copilot]
Tool B → MCP Server → [Claude, GPT, Gemini, Cursor, Copilot]
...24 components for 20 tools + 4 platforms
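The arithmetic above generalizes. A trivial sketch of the two integration-count formulas (the counts are the article's example numbers):

```typescript
// Integration count before MCP: every tool needs a custom build per platform.
function integrationsWithoutMcp(tools: number, platforms: number): number {
  return tools * platforms; // N × M
}

// With MCP: one server per tool, one client per platform.
function integrationsWithMcp(tools: number, platforms: number): number {
  return tools + platforms; // N + M
}

console.log(integrationsWithoutMcp(20, 4)); // 80 separate integrations
console.log(integrationsWithMcp(20, 4)); // 24 components
```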
How MCP Works
MCP defines a client-server architecture with a standardized protocol:
- MCP Hosts: AI applications that want to access tools (Claude, ChatGPT, VS Code, your custom agent)
- MCP Clients: Protocol clients inside hosts that connect to MCP servers
- MCP Servers: Lightweight programs that expose capabilities — tools, resources (files, data), and prompts
Three Primitive Types
MCP exposes three types of capabilities:
- Tools: Functions the model can call (execute SQL, create GitHub issue, send email)
- Resources: Data the model can read (file contents, database records, API responses)
- Prompts: Reusable prompt templates for common workflows
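Concretely, each primitive is just a structured declaration the server advertises to clients. A minimal sketch of the three shapes as plain objects (the specific names and values are illustrative, not from any real server):

```typescript
// Tool: a callable function, described by a JSON Schema for its arguments.
const tool = {
  name: "create_issue",
  description: "Create a GitHub issue",
  inputSchema: {
    type: "object",
    properties: { title: { type: "string" } },
    required: ["title"],
  },
};

// Resource: addressable data the model can read, identified by a URI.
const resource = {
  uri: "file:///logs/app.log",
  name: "Application log",
  mimeType: "text/plain",
};

// Prompt: a reusable template with declared arguments.
const prompt = {
  name: "review_code",
  description: "Ask the model to review a code snippet",
  arguments: [{ name: "code", description: "Code to review", required: true }],
};

console.log(tool.name, resource.uri, prompt.name);
```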
The 2025 Protocol Evolution
The March 2025 update replaced Server-Sent Events (SSE) with Streamable HTTP — a unified bidirectional transport over a single `/mcp` endpoint. This solved enterprise firewall compatibility (WAFs can inspect payloads) and added session resumability via `Mcp-Session-Id` headers.
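In practice, a Streamable HTTP exchange is a JSON-RPC 2.0 message POSTed to that endpoint, with the server-assigned session id echoed back on each request. A sketch of one such call (the server URL and session id are hypothetical):

```typescript
// The single MCP endpoint; one URL for all requests and responses.
const endpoint = "https://example.com/mcp";

// Every MCP message is JSON-RPC 2.0; tools/list asks the server what it offers.
const body = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/list",
  params: {},
};

// Echoing the session id the server assigned during initialization is what
// makes sessions resumable across connections.
const headers = {
  "Content-Type": "application/json",
  "Mcp-Session-Id": "session-abc123", // value assigned by the server
};

// Actual send (requires a live MCP server):
// const res = await fetch(endpoint, {
//   method: "POST",
//   headers,
//   body: JSON.stringify(body),
// });
```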
Key additions through 2025-2026:
- Async operations — long-running tools don't block the conversation
- Tool Annotations — `readOnly` and `destructive` flags for safety guarantees
- Audio Content Support — agents can interface with voice analysis and TTS APIs
- Server identity — cryptographic verification of MCP server provenance
- MCP UI Framework (January 2026) — servers can serve interactive UIs directly in chat
OpenAI's Evolution: From Plugins to MCP
OpenAI's journey to MCP is a story of pragmatic adoption.
The Plugin Era (2023-2024)
ChatGPT Plugins launched in 2023 as a proprietary integration system. Plugins were locked to OpenAI's platform — building a GitHub plugin for ChatGPT didn't help you on Claude or Gemini. The ecosystem never scaled to the size OpenAI hoped.
GPT Actions (late 2023) improved on the plugin model by allowing direct API calls with OpenAPI specs, but remained OpenAI-only. The fundamental N×M problem persisted.
The Pivot (2025)
In May 2025, OpenAI added native support for remote MCP servers. In December 2025, they opened their App Directory to MCP-based submissions. OpenAI also contributed AGENTS.md to the Linux Foundation's AAIF alongside MCP.
What OpenAI supports today:
- Remote MCP server connections natively in ChatGPT
- MCP-compatible tool definitions in the API
- App Directory accepting MCP-based integrations
- GPT-5.4's built-in Tool Search working alongside MCP discovery
What's different about OpenAI's approach: OpenAI still uses function calling internally — the model receives a list of available tools and decides which to invoke. MCP adds the discovery and connection layer: instead of hardcoding tool definitions into your prompt, the model discovers available tools from connected MCP servers.
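That layering shows up directly in the API: a remote MCP server is attached as a tool entry on the request, and the model handles discovery and invocation. A sketch of the request payload, with field names as used in the remote-MCP rollout — the model string and server URL here are illustrative, so check the current OpenAI docs before relying on them:

```typescript
// Request body for an OpenAI Responses API call with a remote MCP server
// attached as a tool. All values below are placeholders.
const payload = {
  model: "gpt-5.4",
  input: "List my open GitHub issues",
  tools: [
    {
      type: "mcp",
      server_label: "github",
      server_url: "https://example.com/mcp",
      require_approval: "never",
    },
  ],
};

// Actual call (requires an API key):
// const res = await fetch("https://api.openai.com/v1/responses", {
//   method: "POST",
//   headers: {
//     "Content-Type": "application/json",
//     Authorization: `Bearer ${apiKey}`,
//   },
//   body: JSON.stringify(payload),
// });
```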
Gemini Extensions: The Transition to MCP
Google's approach to AI tool integration has evolved through several branded systems.
The Extensions Era
Google launched Gemini Extensions as a proprietary integration system for Google Workspace, Maps, YouTube, and third-party tools. Extensions were built on Google's specific schemas and worked only within the Gemini ecosystem.
The fundamental limitation: building a Gmail extension for Gemini didn't give you anything reusable for Claude or GPT-based products.
Google's MCP Adoption
Google DeepMind confirmed MCP support following their participation in the AAIF. The key result: Gemini can now use the same MCP connectors that Claude uses. A single MCP server works across Claude, GPT, and Gemini.
Google's AAIF participation also anchors their commitment to the protocol long-term — abandoning MCP now would mean abandoning Linux Foundation governance.
Current Gemini MCP status:
- MCP server connections supported in Gemini API
- Google Workspace tools increasingly exposed as MCP servers
- AAIF participation signals long-term protocol commitment
- Existing "extensions" migrating toward MCP-compatible schemas
Head-to-Head: Integration Approaches
| Feature | MCP (Industry Standard) | OpenAI Function Calling | Gemini Extensions |
|---|---|---|---|
| Provider support | All (Claude, GPT, Gemini, Copilot, Cursor) | OpenAI only | Gemini only |
| Integration reuse | Build once, use everywhere | Rebuild per provider | Rebuild per provider |
| Server count | 10,000+ | N/A (deprecated) | Limited |
| Protocol governance | Linux Foundation (neutral) | OpenAI | Google |
| Bidirectional | Yes (Streamable HTTP) | No | Limited |
| UI rendering | Yes (MCP UI Framework) | No | Limited |
| Async operations | Yes | Limited | Limited |
| Open source | Yes (MIT) | No | No |
| SDK downloads | 97M+/month | N/A | N/A |
Function Calling vs MCP: Complementary, Not Competing
The most common confusion in 2026: "Should I use function calling or MCP?"
The answer: they operate at different layers and work together.
Function calling is how models invoke tools:
- The model receives tool definitions in its context
- It decides which tool to call and with what arguments
- Your code executes the function and returns the result
- This happens regardless of whether you use MCP
MCP is how tools are discovered, connected, and managed at scale:
- Tools are exposed as MCP servers with standardized schemas
- Any MCP client (your agent, an IDE, Claude, GPT) can discover and connect to them
- The underlying execution often uses function calling internally
- MCP adds discovery, reuse, authentication, and cross-provider portability
Developer builds MCP Server (GitHub tool)
↓
Claude discovers tool via MCP client
↓
Claude uses function calling to invoke the tool
↓
MCP server executes and returns structured result
For simple, single-model applications: function calling alone may be sufficient. For multi-model applications or tools you want to share: MCP is essential.
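Whichever layer supplies the tool definitions, the execution step looks the same: the model names a tool and arguments, and your code dispatches to a handler. A minimal sketch of that dispatch layer (the handler and tool names are illustrative):

```typescript
type ToolHandler = (args: Record<string, unknown>) => Promise<string>;

// Registry of executable tools. With MCP these entries are populated from
// connected servers; without it, they are hardcoded. The dispatch below is
// identical either way.
const handlers: Record<string, ToolHandler> = {
  get_weather: async (args) => `Sunny in ${args.city}`,
};

// Execute a tool call the model requested.
async function executeToolCall(
  name: string,
  args: Record<string, unknown>
): Promise<string> {
  const handler = handlers[name];
  if (!handler) throw new Error(`Unknown tool: ${name}`);
  return handler(args);
}

// executeToolCall("get_weather", { city: "Oslo" }) resolves to "Sunny in Oslo"
```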
Building Your First MCP Server
Here's the minimal pattern for a TypeScript MCP server:
```typescript
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  CallToolRequestSchema,
  ListToolsRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";

const server = new Server(
  { name: "my-api-tool", version: "1.0.0" },
  { capabilities: { tools: {} } }
);

// Advertise the tools this server offers.
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [{
    name: "search_apis",
    description: "Search for APIs by category and features",
    inputSchema: {
      type: "object",
      properties: {
        query: { type: "string", description: "Search query" },
        category: { type: "string", description: "API category" }
      },
      required: ["query"]
    }
  }]
}));

// Execute a tool call. searchApiDatabase is your own lookup logic.
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name === "search_apis") {
    const results = await searchApiDatabase(request.params.arguments.query);
    return { content: [{ type: "text", text: JSON.stringify(results) }] };
  }
  throw new Error(`Unknown tool: ${request.params.name}`);
});

const transport = new StdioServerTransport();
await server.connect(transport);
```
This server works with Claude, GPT, Gemini, Cursor, and any other MCP-compatible client — without modification.
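On the client side, a tool result arrives as a content array rather than raw JSON, so callers typically unwrap the first text part before using it. A small sketch of that unwrapping (the helper name is ours, not from the SDK):

```typescript
type ContentPart = { type: string; text?: string };

// Unwrap the first text part of an MCP tool result and parse it as JSON.
function parseToolResult<T>(content: ContentPart[]): T {
  const part = content.find((p) => p.type === "text");
  if (!part?.text) throw new Error("No text content in tool result");
  return JSON.parse(part.text) as T;
}

// Example: a tool that returned JSON.stringify(results) as its text content.
const result = parseToolResult<string[]>([
  { type: "text", text: '["weather-api", "geo-api"]' },
]);
console.log(result); // [ 'weather-api', 'geo-api' ]
```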
The MCP Ecosystem in 2026
Server Categories (10,000+ published)
- Developer tools: GitHub, GitLab, Jira, Linear, Figma, Notion
- Data sources: PostgreSQL, MySQL, MongoDB, Supabase, Neon
- Cloud services: AWS, Cloudflare, GCP, Azure
- Communication: Slack, Discord, Gmail, calendar systems
- Observability: Datadog, Axiom, Sentry, PagerDuty
- AI infrastructure: Vector databases, embedding services, model registries
Discovery and Registry
The November 2025 MCP spec update introduced an official community-driven registry for discovering MCP servers. This solves the discoverability problem — instead of hunting GitHub repos, developers can browse a curated registry of production-ready MCP servers.
Security
The AAIF and enterprise deployments have driven significant security improvements:
- Server identity verification via cryptographic signing
- Tool annotation flags (`readOnly`, `destructive`) for automated safety checks
- Authentication standards for remote MCP server connections
- Enterprise WAF compatibility through Streamable HTTP
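Those annotation flags are what make automated safety checks possible: a host can gate tool execution on them instead of asking a human about every call. A sketch of such a gate, using the flag names described above (a real host's policy logic would be richer):

```typescript
type ToolAnnotations = { readOnly?: boolean; destructive?: boolean };

// Decide whether a tool call needs human approval based on its annotations.
// Unannotated tools are treated as unsafe by default.
function requiresApproval(annotations?: ToolAnnotations): boolean {
  if (!annotations) return true; // no metadata: assume unsafe
  if (annotations.destructive) return true; // e.g. delete_repo
  return !annotations.readOnly; // only read-only tools run unattended
}

// requiresApproval({ readOnly: true }) is false; everything else needs a human.
```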
Practical Integration Decision Guide
Build with MCP when:
- Your tool needs to work with multiple AI providers (almost always true)
- You want to list your tool in public MCP registries
- Your enterprise needs standardized security and audit controls
- You're building tools for agentic workflows (multi-step, multi-model)
Use function calling alone when:
- You're building a prototype that only needs to work with one provider
- Your tool is deeply provider-specific (OpenAI computer use, Claude extended thinking)
- You need capabilities not yet standardized in MCP
Ignore deprecated approaches:
- OpenAI Assistants API (deprecated)
- OpenAI proprietary plugins (deprecated, replaced by MCP submissions)
- Provider-specific extension systems (migrating to MCP)
The Road Ahead
The Linux Foundation's AAIF governance means MCP evolves as neutral infrastructure, not a single vendor's product. The 2026 roadmap includes:
- Agent-to-Agent (A2A) protocol standardization — extending MCP beyond tool use to agent coordination
- Network-native integrations — CAMARA API integration enabling AI to consume real-time telecom network context
- Security framework — standardized authentication and authorization across the ecosystem
Verdict
The "MCP vs plugins vs extensions" debate is settled. OpenAI deprecated their proprietary systems. Google joined the AAIF. Every major IDE, AI platform, and enterprise AI infrastructure vendor has adopted MCP.
If you're building AI tool integrations in 2026, there's one clear answer: build MCP servers. The protocol is open source, Linux Foundation-governed, and works across every platform that matters. The alternative — building separate integrations per provider — is technical debt you're creating on day one.
The real question isn't MCP vs alternatives. It's which tools in your infrastructure to expose as MCP servers first.
Building tools for AI agents? Explore the MCP ecosystem and compare API capabilities for 100+ services at APIScout.