MCP: A First Look at the Protocol Reshaping AI Application Development
Breaking: Two major developments this week have catapulted MCP from promising experiment to industry standard practically overnight:
OpenAI announced they'll be supporting MCP across their products, with integration available today in their Agents SDK and support for the ChatGPT desktop app and Responses API coming soon.
Anthropic shared that a new version of the MCP spec was finalized today, introducing critical improvements including an auth framework based on OAuth 2.1, streamable HTTP transport replacing the previous HTTP+SSE implementation, support for JSON-RPC batching, and tool annotations for better describing tool behavior.
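For readers unfamiliar with JSON-RPC batching: it simply means sending an array of request objects in a single round trip instead of one request per message. A minimal sketch of what a batched payload could look like (the `tools/list` and `tools/call` method names follow MCP convention; the `fetch_document` tool and its arguments are illustrative):

```python
import json

# A JSON-RPC 2.0 batch is just an array of request objects,
# letting a client bundle several calls into one round trip.
batch = [
    {"jsonrpc": "2.0", "id": 1, "method": "tools/list"},
    {
        "jsonrpc": "2.0",
        "id": 2,
        "method": "tools/call",
        "params": {"name": "fetch_document", "arguments": {"url": "https://example.com"}},
    },
]

payload = json.dumps(batch)
# Responses come back as an array too, matched to requests by "id".
decoded = json.loads(payload)
print(len(decoded))  # 2 requests in one payload
```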
In just four months, MCP has evolved from an idea at Anthropic about making integrations easier for developers into the de facto standard for AI app integrations across the industry.
Here are my initial thoughts on this rapidly evolving technology and what it means for both builders and users.
What Is MCP, Really?
Let's cut through the marketing: MCP is a protocol, not a product. Think of it as today's equivalent of early USB or OAuth standards. It defines a way for AI assistants like Claude to communicate with external applications through specialized "servers" without requiring custom code for each integration.
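Conceptually, an MCP server is a dispatcher: the assistant sends a structured request naming a tool, and the server routes it to ordinary integration code. Here's a toy sketch of that idea (the function names and request shapes are illustrative, not the official SDK's API):

```python
# Toy dispatcher illustrating the server side of the protocol:
# the host names a tool, the server runs the matching handler.
def get_weather(city: str) -> str:
    return f"Forecast for {city}: sunny"  # stand-in for a real API call

TOOLS = {"get_weather": get_weather}

def handle_tool_call(request: dict) -> dict:
    handler = TOOLS[request["name"]]
    result = handler(**request["arguments"])
    # MCP-style results wrap content in typed blocks
    return {"content": [{"type": "text", "text": result}]}

response = handle_tool_call({"name": "get_weather", "arguments": {"city": "Berlin"}})
print(response["content"][0]["text"])
```

The point of the standard is that this dispatch layer, not the integration logic, is what every host can now speak uniformly.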
The promise is compelling: write integration code once as an MCP server, and any compatible AI assistant (or application) can use it. This standardization could eliminate countless redundant implementations and eventually make AI-to-application connections as seamless as authenticating with "Login with Google."
The Builder's Perspective
As someone who's built AI products, I approach MCP with cautious optimism. The protocol doesn't eliminate the need to write code — someone still needs to build those MCP servers. What it does is create the potential for standards that allow that code to be written once and used across multiple AI hosts.
Currently, implementing MCP servers requires more technical savvy than most users possess. You're still editing JSON files and managing API keys. This technical barrier means we're firmly in early adopter territory.
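"Editing JSON files" concretely means something like Claude Desktop's `claude_desktop_config.json`, where each server is registered with a launch command and any secrets it needs. A sketch (the server name, package, and env variable are illustrative):

```json
{
  "mcpServers": {
    "my-database": {
      "command": "npx",
      "args": ["-y", "@example/mcp-server-database"],
      "env": { "DATABASE_API_KEY": "<your key here>" }
    }
  }
}
```

Hand-editing files like this, restarting the app, and hoping the server shows up is exactly the kind of friction that keeps this in early adopter territory.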
Initially, MCP looked like Anthropic's version of Android to OpenAI's iOS. While OpenAI had built a walled-garden approach with the GPT Store and plugins, Anthropic appeared to be betting on an open ecosystem that other AI companies might rally behind to compete collectively against the market leader.
With today's announcement from Sam Altman, this dynamic has shifted significantly. OpenAI's embrace of MCP suggests we may be moving toward a more standardized ecosystem rather than competing walled gardens. This could accelerate developer adoption dramatically, as building MCP servers now potentially grants access to both Claude and ChatGPT users.
The User Experience
For users, MCP's promise is enabling AI assistants to actually do things rather than just talk about them. Instead of asking Claude to write a script to download regulatory documents, you could simply ask Claude to download them directly. That's powerful.
In practice, however, the experience remains hit-or-miss. Even after installing an MCP server, you might find yourself repeatedly prompting "Please use the database server" while Claude seemingly ignores your request. The non-deterministic nature of LLMs means you can't guarantee your MCP server will be called even when it's the perfect tool for the job.
When it works, though, it's genuinely impressive. I've seen demos where Claude analyzes database contents or updates blog tags across platforms. These experiences feel magical compared to traditional interfaces.
The Missing Pieces
Earlier this month, several crucial elements still needed addressing for MCP to reach mainstream adoption:
Installation and discovery - There was no centralized marketplace or one-click installation process
Authentication and permissions - The protocol lacked built-in OAuth-like functionality for managing access across users
Function namespace collision - All functions lived in the same namespace, creating challenges as collections of MCP servers grew
Testing and reliability - Building evaluation frameworks was difficult when you couldn't predict which model would process your requests
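The namespace problem is easy to see: two servers can each expose a tool called "search." Until the spec addresses this directly, a host can disambiguate by prefixing tool names with the server name, roughly like this (the `__` separator is my assumption for illustration):

```python
# Two servers each exposing a tool named "search" would collide
# in a flat namespace; prefixing with the server name disambiguates.
servers = {
    "github": {"search": lambda q: f"github results for {q!r}"},
    "notion": {"search": lambda q: f"notion results for {q!r}"},
}

registry = {}
for server_name, tools in servers.items():
    for tool_name, fn in tools.items():
        registry[f"{server_name}__{tool_name}"] = fn

print(sorted(registry))  # ['github__search', 'notion__search']
print(registry["notion__search"]("MCP"))
```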
Remarkably, today's spec update directly addresses several of these concerns. The addition of an OAuth 2.1-based authentication framework resolves one of the most significant barriers to enterprise adoption. Just as I predicted earlier, we're seeing internet protocols building on top of each other - MCP isn't reinventing authentication but leveraging established standards.
This pattern of layered protocols mirrors how the internet itself developed. Each new protocol builds upon existing foundations rather than starting from scratch. MCP represents a new layer connecting probabilistic AI systems to deterministic applications, following the same evolutionary pattern.
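A concrete example of that layering: OAuth 2.1 requires PKCE, in which the client commits to a secret at the start of the flow and reveals it at the end, proving both requests came from the same party. The challenge derivation is just base64url-encoded SHA-256, per RFC 7636:

```python
import base64
import hashlib
import secrets

# PKCE: send the hashed "challenge" when requesting authorization,
# then reveal the original "verifier" when exchanging the code.
verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
digest = hashlib.sha256(verifier.encode("ascii")).digest()
challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()

print(len(challenge))  # 43 chars: 32-byte SHA-256, base64url, no padding
```

MCP didn't have to invent any of this, which is precisely the point.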
Interfaces of the Future
Will MCP live only inside AI assistants? Put differently: are conversational interfaces destined to become dominant, or are they just a transitional technology?
While natural language reduces the barrier to discovering functionality (no more remembering exact menu options), the cognitive load of expressing yourself through language remains significant. Most users don't want to craft perfect prompts — they want buttons, visual elements, and guided experiences.
The most likely future combines these approaches: purpose-built applications with clear visual interfaces for common tasks, augmented by conversational capabilities for flexibility and exploration.
MCP is valuable regardless of interface evolution because it standardizes how AI-native applications connect to traditional software, creating a consistent bridge between probabilistic and deterministic systems. Whether accessed through chat, buttons, or voice commands, the underlying protocol enables seamless integration across application types.
Looking Forward
With both OpenAI and Anthropic firmly behind MCP, the question of industry adoption seems settled – this protocol is now poised to become the standard way AI assistants interact with external applications. The speed of this convergence is unprecedented in protocol development, compressing what would typically take years into mere months.
The rapid evolution of the MCP spec itself is equally impressive. The inclusion of OAuth-based authentication, streamable HTTP transport, and improved tool annotations addresses several of the key limitations I observed just weeks ago. This pace of improvement suggests we may see solutions to remaining challenges like function namespace collision and testing frameworks sooner rather than later.
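Tool annotations, in particular, give hosts machine-readable hints about behavior before a tool is ever invoked. Based on my reading of the updated spec, a tool definition can now carry hints along these lines (`delete_record` and its schema are illustrative):

```json
{
  "name": "delete_record",
  "description": "Delete a record from the database",
  "inputSchema": { "type": "object", "properties": { "id": { "type": "string" } } },
  "annotations": {
    "readOnlyHint": false,
    "destructiveHint": true,
    "idempotentHint": true
  }
}
```

A host could use hints like these to, say, require user confirmation before running destructive tools, which matters a great deal for the trust problems described above.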
For builders, this represents a clear signal to invest in MCP development. Building MCP servers now grants access to both Claude and ChatGPT users – covering the vast majority of the consumer AI assistant market. This dramatically changes the ROI calculation for developers who were hesitant to commit resources to an unproven protocol.
These are just my initial impressions as the landscape evolves incredibly quickly. By the time you read this, more details about implementation timelines or additional improvements to the spec may have emerged. The AI ecosystem changes daily, and we're all figuring this out in real-time.
What's clear is that MCP represents a fundamental shift in how AI systems interact with external applications. Just as HTTP standardized web communications and OAuth standardized authentication, MCP is standardizing the bridge between probabilistic AI systems and deterministic applications – creating the foundation for the next wave of AI-native software.