Model Context Protocol (MCP)

Model Context Protocol (MCP) is an open standard for connecting AI applications to external systems — databases, APIs, tools, and workflows. Often described as “the USB-C port for AI applications,” MCP replaces the proliferation of one-off custom integrations with a single, universal protocol.

The Problem It Solves

Before MCP, every AI assistant required bespoke connectors for each external tool. Teams building on top of LLMs maintained fragmented integration code: one adapter for Slack, another for GitHub, another for PostgreSQL. Each was brittle, non-transferable, and expensive to maintain. MCP standardizes this layer so a server written once can connect to any MCP-compatible host — Claude, ChatGPT, VS Code Copilot, Cursor, and others.

Architecture: Hosts, Clients, and Servers

MCP follows a three-participant model:

  • MCP Host: The AI application (e.g., Claude Desktop, VS Code) that coordinates connections
  • MCP Client: A component inside the host that maintains a dedicated connection to one server
  • MCP Server: A program that exposes capabilities — runs locally (via stdio) or remotely (via HTTP)

A host can simultaneously connect to multiple servers, creating one client instance per server. Communication uses JSON-RPC 2.0 over either stdio (local, no network overhead) or Streamable HTTP (remote, supports OAuth authentication).
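As a concrete illustration of the wire format, here is a minimal sketch (stdlib only) of how a client frames a JSON-RPC 2.0 `initialize` request for the stdio transport. The `initialize` method and the `jsonrpc`/`id`/`method`/`params` envelope come from the JSON-RPC 2.0 and MCP specifications; the protocol version string and client info values are placeholder assumptions:

```python
import json

def make_request(req_id, method, params=None):
    """Build a JSON-RPC 2.0 request object of the kind MCP exchanges."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return msg

# Over the stdio transport, each message is one line of JSON
# (newline-delimited) written to the server process's stdin.
initialize = make_request(1, "initialize", {
    "protocolVersion": "2025-03-26",  # placeholder spec revision
    "capabilities": {},
    "clientInfo": {"name": "example-host", "version": "0.1"},
})
wire = json.dumps(initialize) + "\n"
print(wire, end="")
```

The same envelope travels over Streamable HTTP for remote servers; only the transport framing differs.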

The Three Server Primitives

MCP servers expose three types of capabilities:

  • Tools: Executable functions the AI can invoke (file operations, API calls, database queries)
  • Resources: Data sources providing contextual information (file contents, schema definitions, API responses)
  • Prompts: Reusable interaction templates (system prompts, few-shot examples)

Clients discover these primitives dynamically via list methods (tools/list, resources/list, prompts/list), enabling servers to update their capabilities at runtime. Servers can notify connected clients when their tool list changes, keeping hosts synchronized without polling.
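The discovery flow can be sketched as below. The in-memory registry, tool name, and `handle` dispatcher are hypothetical stand-ins for a real server; the `tools/list` method name, the `inputSchema` field (a JSON Schema describing tool arguments), and the `-32601` "method not found" error code follow the MCP and JSON-RPC 2.0 conventions:

```python
import json

# Hypothetical registry standing in for a real MCP server's tools.
TOOLS = [
    {
        "name": "query_database",
        "description": "Run a read-only SQL query.",
        "inputSchema": {  # JSON Schema for the tool's arguments
            "type": "object",
            "properties": {"sql": {"type": "string"}},
            "required": ["sql"],
        },
    },
]

def handle(request):
    """Dispatch a JSON-RPC request to the matching list method."""
    if request["method"] == "tools/list":
        result = {"tools": TOOLS}
    else:
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32601, "message": "Method not found"}}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}

resp = handle({"jsonrpc": "2.0", "id": 2, "method": "tools/list"})
print(json.dumps(resp))
```

Because clients call `tools/list` at runtime rather than hard-coding capabilities, a server can add or remove entries from its registry and simply notify connected clients that the list changed.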

Security Considerations

Because MCP tool descriptions are injected directly into the model's context, connecting to an untrusted MCP server creates a prompt injection attack surface: a malicious server can shape agent behavior through its tool documentation alone. Additionally, locally running servers execute arbitrary code on the host machine. Practitioners recommend:

  • Connect only to servers you control or explicitly trust
  • Disable servers that are not actively in use — excess tool descriptions consume context budget and degrade agent performance by pushing models “into the dumb zone”
  • Prefer CLI tools the model already knows from training data when a well-known tool offers no MCP-specific advantage

Early Adoption and Ecosystem

Launched by Anthropic in late 2024, MCP was immediately adopted by Block, Apollo, Zed, Replit, Codeium, and Sourcegraph. It underpins how Harness-Engineering expands AI-Coding-Agent-Architecture beyond file I/O into broader system integration, and it is a building block for more advanced agent tooling patterns explored in Context-Engineering.

Note

This content was drafted with assistance from AI tools for research, organization, and initial content generation. All final content has been reviewed, fact-checked, and edited by the author to ensure accuracy and alignment with the author’s intentions and perspective.