Similar to how HTTP defined internet communication standards in the early 1990s, we now see the emergence of a new layer of interoperability, the Model Context Protocol (MCP), a standard that enables large language models (LLMs) and AI assistants to interact consistently with external tools, data sources, and services.

Yet, as with past technological booms, many businesses without a genuine AI foundation label themselves as “AI companies” to ride the wave of enthusiasm. History doesn’t repeat exactly, but it rhymes: during the Dot-Com Bubble of 1999, adding ‘.com’ to a company’s name inflated valuations overnight (Investopedia). In the Gen-AI Boom of 2024-25, calling a product “AI-powered” plays a similar role — attracting rapid funding and speculative attention.

The reality check is that while the internet’s transformative potential was real, enduring value accrued primarily to robust platforms. The AI landscape today similarly favors substantial companies over single-feature startups.

In the sections ahead, we define what MCP is (and isn’t), compare it with HTTP, answer whether it replaces HTTP, spotlight a creator use case for AI search, map where your business fits with an Executive Cheat Sheet, and close with the bottom line.

HTTP in the ’90s → MCP in the mid-2020s

MCP isn’t just another protocol — it echoes the core design choices that made HTTP endure. The points below highlight three such traits: simplicity of connection, compounding value through standardization, and a resilient ecosystem built around stable, shared infrastructure.

  • A streamlined, universal connector. Before HTTP, serving a single document online required separate FTP, Gopher, and WAIS setups, each with its own client software and link syntax (Berners-Lee, 1989). HTTP consolidated this into one protocol and address scheme, allowing universal browser access. MCP targets a similar pain point for AI: today, every “AI integration” has its own REST schema and JSON structures. MCP replaces that complexity with a single JSON-RPC dialect plus a capability handshake. An LLM client can list a server’s tools (actions), resources (data), and prompts (expert instructions) at runtime (see MCP overview), then call them without custom wrappers. Streamable HTTP and stdio are the standard transport mechanisms; custom transports are also allowed, so WebSockets or other bidirectional channels can be implemented, though they aren’t specified as core transports in the standard. Last but not least, MCP adoption spans both model labs and developer ecosystems, from Anthropic and OpenAI to Google’s Toolbox for Databases, VS Code, and Replit, signaling a move toward standardized AI-to-tool interoperability.

  • Network effects outweigh isolated features. HTTP’s ubiquity created a flywheel: each new web page added network value to the protocol, reinforcing incentives for further adoption. The same dynamic may be at play with MCP: each additional host or server not only uses the standard but also increases its utility for all existing participants, a pattern consistent with diffusion dynamics in network structures (Myers et al., 2012; Standard Diffusion in Growing Networks, 2014).

  • Infrastructure lasts, applications change. Early web browsers came and went, but companies like Cisco, Akamai, and AWS endured by building and operating the internet’s underlying infrastructure. MCP will likely follow this pattern: its long-term value will accrue to those who operate the infrastructure and maintain the key datasets that power LLMs. True progress, though, comes from the interaction between strong infrastructure and evolving applications, where solid foundations make new products and experiences possible.
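To make the discovery step above concrete, here is a toy sketch of the JSON-RPC messages an MCP client and server might exchange, built with only the standard library. The method name `tools/list` comes from the MCP spec; the `search_docs` tool and its schema are hypothetical, and a real client would use an MCP SDK over stdio or streamable HTTP rather than raw dicts.

```python
import json

# JSON-RPC 2.0 request an MCP client sends to discover a server's tools.
# The id field correlates this request with the server's response.
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# A plausible (hypothetical) server response: each tool carries a name,
# a description readable by humans and LLMs, and a JSON Schema for inputs.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "search_docs",
                "description": "Full-text search over the product docs.",
                "inputSchema": {
                    "type": "object",
                    "properties": {"query": {"type": "string"}},
                    "required": ["query"],
                },
            }
        ]
    },
}

# The client can now reason over the declared capabilities without any
# hand-written wrapper: it knows each tool's name and input shape.
wire = json.dumps(request)          # what actually travels over the transport
tools = response["result"]["tools"]
print([t["name"] for t in tools])   # -> ['search_docs']
```

The point of the handshake is exactly this symmetry: the client never needs a compiled-in schema, because the server's self-description arrives at runtime.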


MCP vs. HTTP

MCP serves as an AI-focused counterpart to integration standards like SOAP, RAML, and OpenAPI: SOAP standardizes messaging, while RAML and OpenAPI describe interfaces; MCP combines message transport with runtime capability discovery.

Basic HTTP endpoints usually need extra tooling, such as OpenAPI specs or custom documentation, before an AI can use them reliably, and that extra layer invites errors and inefficiency. MCP reduces the complexity to three fundamental primitives:

  • Tools – perform actions (akin to HTTP’s POST/PUT)

  • Resources – fetch information (akin to HTTP’s GET)

  • Prompts – expert guidance (no HTTP equivalent)

Each primitive has structured descriptions acting as built-in documentation, enhancing usability.
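The three primitives and their built-in descriptions can be sketched with plain dataclasses; this is a conceptual model, not the real MCP SDK, and the example capabilities (`create_ticket`, `kb://faq`, `triage`) are invented.

```python
from dataclasses import dataclass, field

@dataclass
class Capability:
    name: str
    description: str  # the structured description doubles as built-in docs

@dataclass
class Server:
    tools: list[Capability] = field(default_factory=list)      # actions (≈ POST/PUT)
    resources: list[Capability] = field(default_factory=list)  # data (≈ GET)
    prompts: list[Capability] = field(default_factory=list)    # expert guidance

    def describe(self) -> dict:
        """What an LLM client sees when it lists this server's capabilities."""
        return {
            kind: [{"name": c.name, "description": c.description} for c in caps]
            for kind, caps in [
                ("tools", self.tools),
                ("resources", self.resources),
                ("prompts", self.prompts),
            ]
        }

server = Server(
    tools=[Capability("create_ticket", "Open a support ticket.")],
    resources=[Capability("kb://faq", "The current FAQ document.")],
    prompts=[Capability("triage", "Steps for triaging an incoming ticket.")],
)
catalog = server.describe()
print(sorted(catalog))  # -> ['prompts', 'resources', 'tools']
```

Because every capability ships with its own description, the catalog is simultaneously the API surface and the documentation, which is the usability gain the text describes.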

Conceptually, MCP complements other interfaces:

  • Frontend → Humans

  • REST API → Traditional apps/services

  • MCP API → AI agents

While humans can technically use REST APIs and apps can access frontends, dedicated interfaces optimize usability.

Anthropic, which developed MCP, compares it to USB-C: a clean, universal connector that replaced fragmented standards and improved usability across devices (Anthropic’s MCP Intro). It’s a sharp analogy. Still, the fuller picture comes from HTTP, whose standardization created not just convenience but an enduring ecosystem—an evolution MCP may now replay in the AI stack.

Traditional interface standards like gRPC, GraphQL, and OpenAPI focus on deterministic clients: they define static interfaces for pre-coded systems. MCP, by contrast, introduces semantic interoperability for probabilistic agents: it lets models dynamically discover available tools, data, and playbooks at runtime. Instead of hard-coded API schemas, it provides self-describing capability layers that LLMs can query, reason about, and invoke safely. In short, where a typical protocol documents how to call a system, MCP lets the system describe itself to an AI.
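The contrast can be sketched in a few lines: instead of compiling against a fixed schema, an agent first asks what is available and then dispatches by a name it discovered. The registry and its two capabilities below are hypothetical stand-ins for a real MCP client/server pair.

```python
# Hypothetical server-side registry: name -> (description, callable).
registry = {
    "echo": ("Return the input text unchanged.", lambda text: text),
    "upper": ("Return the input text uppercased.", lambda text: text.upper()),
}

def list_capabilities() -> list[dict]:
    """The self-description an agent queries at runtime (no hard-coded schema)."""
    return [{"name": n, "description": d} for n, (d, _) in registry.items()]

def invoke(name: str, **kwargs):
    """Dispatch by the name the agent discovered, not one it was compiled with."""
    _, fn = registry[name]
    return fn(**kwargs)

# The agent first asks "what can you do?", then picks a capability.
available = [c["name"] for c in list_capabilities()]
result = invoke("upper", text="mcp") if "upper" in available else None
print(result)  # -> MCP
```

If the server adds a capability tomorrow, the same agent loop finds and uses it with no client redeploy, which is what "self-describing" buys over a static schema.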


Will MCP Replace HTTP?

| Aspect | HTTP Strengths | MCP Strengths |
| --- | --- | --- |
| Reach | Ubiquitous, browser-supported | AI-focused, precise interactions |
| Tooling needs | External schemas/documentation | Built-in discovery and documentation |
| Best suited for | Web interfaces, static content | AI workflows, context management |

HTTP remains essential for browsers, caches, and CDNs. MCP does not displace it; it complements HTTP with capability discovery and AI-first call semantics for agent workflows.

Remark: MCP gives clients predictable call semantics and audit-friendly error handling by standardizing the envelope (JSON-RPC, tool/error schemas). Truly deterministic results (returning the exact same bytes for the same request) require a policy layer above the core protocol, e.g., content-addressed URIs or versioned snapshots.
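One way to picture that policy layer is content addressing: if the identifier is a hash of the bytes, the same URI can only ever name the same content. This is a generic hashing sketch, not part of the MCP spec; the `cas://` scheme is invented for illustration.

```python
import hashlib

store: dict[str, bytes] = {}  # toy content-addressed resource store

def publish(data: bytes) -> str:
    """Store bytes under a URI derived from their SHA-256 digest."""
    uri = "cas://sha256/" + hashlib.sha256(data).hexdigest()
    store[uri] = data
    return uri

def fetch(uri: str) -> bytes:
    data = store[uri]
    # Verify on read: the URI commits to the exact bytes it names.
    assert uri.endswith(hashlib.sha256(data).hexdigest())
    return data

uri = publish(b"quarterly report v3")
assert fetch(uri) == b"quarterly report v3"
# Publishing identical bytes yields the identical URI: deterministic by construction.
assert publish(b"quarterly report v3") == uri
```

Versioned snapshots achieve the same end by other means: pin a resource to an immutable version, and repeated fetches of that version cannot drift.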


Creators: AI Search via MCP

Google’s recent shift toward AI-generated answers, rather than linking to external websites, has raised concern among creators about the sustainability of the web’s funding model. It challenges the long-standing exchange where attention and traffic sustained content ecosystems. Yet this shift also exposes the limits of that model: value creation has been mediated by algorithms that decide who gets visibility and revenue. MCP reframes the equation. Instead of displacing creators, it introduces a direct interoperability layer where verified data and expertise can flow to AI systems transparently and be credited and compensated by design. What looks like disruption may, in fact, be the start of a fairer value chain for digital knowledge.

Instead of publishing HTML pages for search engines to crawl, content creators could expose structured Resources (data) and curated Prompts (expert interpretations) directly through standardized MCP servers. AI-driven search engines could then access and present verified, authoritative content, clearly crediting original providers and potentially compensating them on a per-query basis. This model transforms creators into Context Providers — active participants in a verifiable knowledge economy with deterministic logs and usage-based rewards, anchored to transparent value exchange rather than the swings of ad-driven traffic.
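A toy sketch of the per-query crediting idea follows; it assumes nothing about how real attribution or payment would work, and the resource URIs, provider names, and the flat rate are all invented.

```python
from collections import Counter

# Hypothetical mapping from resource URI to the creator who serves it.
providers = {"kb://ocean-data": "alice", "kb://tax-guide": "bob"}
usage = Counter()  # per-provider query counts: the "deterministic log"

def answer_query(resource_uri: str) -> str:
    """Serve a query from a creator's resource and credit that creator."""
    usage[providers[resource_uri]] += 1
    return f"answer drawn from {resource_uri}"

for _ in range(3):
    answer_query("kb://ocean-data")
answer_query("kb://tax-guide")

# Usage-based rewards: a flat, hypothetical rate of 1 cent per query.
RATE_CENTS = 1
payouts_cents = {creator: n * RATE_CENTS for creator, n in usage.items()}
print(payouts_cents)  # -> {'alice': 3, 'bob': 1}
```

The essential shift is that the log, not an opaque ranking algorithm, determines who gets credited, so compensation can be audited query by query.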


Executive Cheat Sheet: Where MCP Fits in the AI Stack

Use this to quickly place your business or product and see the landscape clearly. MCP is an interface, not a business model. It connects AI models to data and workflows so systems become more interoperable and reliable. Some players act mainly as MCP servers (exposing data or actions); others as clients (calling them); many are both—or none at all.

| AI Layer | What they actually do | Honest label | MCP interface (typical) | Durability driver |
| --- | --- | --- | --- | --- |
| Model Labs | Build/serve foundation models | AI company | Client to enterprise Tools/Resources | Model IP + research velocity |
| Infra Platforms | Compute, data, orchestration | Infra-AI | Both (host servers; orchestrate clients) | Distribution + ecosystem lock-in |
| Domain SaaS (vertical apps) | Ship workflows with embedded AI | AI-enabled software | Both (Tools/Resources + Playbook Prompts*) | Workflow ownership, switching costs |
| Context Providers (data/content) | Provide proprietary datasets/POV | Context provider | Server (Resources/Prompts) | Unique data, credibility, refresh rate |
| AI-washing | Superficial “AI” add-ons | Hype-risk | None (thin; nothing to standardize) | Fragile without real capability |

*Playbook Prompts: prompts that encode your standard operating procedures so the AI follows your steps consistently.
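A playbook prompt can be as simple as a template that serializes your SOP into numbered steps the model is told to follow; the refund procedure below is an invented placeholder.

```python
# Hypothetical SOP encoded as an ordered list of steps.
REFUND_SOP = [
    "Verify the order ID and purchase date.",
    "Check the refund policy window (30 days).",
    "If eligible, issue the refund and log the ticket ID.",
]

def playbook_prompt(task: str, steps: list[str]) -> str:
    """Render an SOP as a prompt the model must follow step by step."""
    numbered = "\n".join(f"{i}. {s}" for i, s in enumerate(steps, 1))
    return f"Task: {task}\nFollow these steps exactly, in order:\n{numbered}"

prompt = playbook_prompt("Process a refund request", REFUND_SOP)
print(prompt.splitlines()[2])  # -> 1. Verify the order ID and purchase date.
```

Exposed as an MCP Prompt, the same template becomes discoverable by any client, so your procedure travels with your server rather than living in each integration.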

Heuristic: If you own workflows, expose them as Tools. If you own data/context, expose them as Resources/Prompts. If you build models/agents, be a strong Client. If your strategy is “add MCP”, first decide what you own.


Bottom Line

What MCP is. The interface layer that lets AI agents operate real systems — standardizing Tools (actions), Resources (data), and Prompts (expert playbooks). Think HTTP for AI operations.

What it isn’t. Not a replacement for HTTP; it complements it with discovery, auditability, and interoperability for AI workflows.

Why now. AI is moving from chat to execution. Without a standard, every integration is brittle glue code with weak governance. MCP brings predictable calls, capability discovery, and traceable outcomes.

In summary. MCP turns fragmented AI experiments into integrated, governable workflows — favoring teams that own data or workflows and ship reliable interfaces over those selling hype. Winners will own valuable assets (data, infrastructure, trust) — not just labels. Decide whether you are a model lab, infrastructure provider, or context provider; otherwise, you’re likely their customer.


Acknowledgments

This essay benefited from the thoughtful input of two friends. Ygor Rebouças provided a meticulous review of the text and formatting, helping refine the reader experience and narrative flow in key sections. Leandro Lima contributed through long-running technical discussions on MCP’s design and use cases—conversations that, over months of informal exchange, sharpened the ideas and pushed the author to explore the topic in greater depth. Their perspectives meaningfully shaped the clarity and depth of this work. I’m sincerely thankful for their time, insights, and generosity in sharing their expertise.