Every time a new AI agent needs access to a new system, someone writes a new integration. Your Slack tool, your GitHub tool, your PostgreSQL tool — each one is bespoke, each one lives inside one app, and none of them work with anyone else's agent. That's the tool-use problem MCP was designed to fix.
Model Context Protocol (MCP) is the attempt to make tools reusable. Write a connector once, and any MCP-aware agent — Claude, Cursor, Zed, custom builds — can use it. This post explains what MCP actually is, how it differs from the function calling you already know, and how to build your first MCP server in about 20 lines of Python.
What you'll learn
- The fragmentation problem MCP exists to solve
- The three-role architecture (host, client, server) and why it matters
- MCP vs function calling — they're not competitors, they compose
- How to build an MCP server and try it in Claude Desktop
- Real-world use cases that are only possible with MCP
The Tool Use Problem MCP Solves
Before MCP, every AI app had its own idea of "tools." LangChain had its format, OpenAI had a different one, Anthropic had yet another, and every enterprise IT team bolted their own wrappers on top. The result:
- A Slack integration for one agent did not work for another
- Every new agent had to re-implement every connector
- Enterprise review of "can this agent touch our database?" had to repeat per app
- Tool authors had no way to publish a reusable integration
MCP, announced by Anthropic in late 2024 and widely adopted across the industry by 2026, defines a protocol — a wire format and a set of methods — that any tool server and any agent host can speak. Write the server once, and every MCP-aware agent can use it. Think of it as the "USB" for AI tools: a standard plug instead of bespoke wiring.
If you want the original motivation, Anthropic's MCP introduction is worth reading.
How MCP Works (Architecture)
MCP defines three roles:
Host — the application the user interacts with. Claude Desktop, Cursor, Zed, your custom agent — these are hosts. Hosts manage LLM calls, show the UI, and launch MCP clients.
Client — a connection to one MCP server. The host spawns one client per server it wants to use. Clients translate between the host and the server's MCP protocol.
Server — exposes tools, resources, and prompts to clients. Your GitHub server, your Postgres server, your filesystem server — each is a separate process speaking MCP.
┌─────────────┐ ┌──────────┐ ┌──────────────┐
│ Host │◀──────▶│ Client │◀──────▶│ MCP Server │
│ (Claude App)│ JSON │ │ JSON │ (GitHub) │
└─────────────┘ RPC └──────────┘ RPC └──────────────┘
The wire protocol is JSON-RPC 2.0, usually over stdio (for local servers) or streamable HTTP (for remote ones). Messages cover three main capabilities:
- Tools — functions the model can call (listIssues, createPR)
- Resources — read-only context the model can load (a file, a DB schema, a wiki page)
- Prompts — reusable prompt templates the user can invoke
An agent host like Claude Desktop calls tools/list on connect, remembers what each server offers, and at inference time exposes those tools to the model exactly like regular function-calling tools. The model picks a tool; the host dispatches the call to the right MCP server; the result comes back through the client.
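The discovery step above can be sketched as plain JSON-RPC 2.0 messages. This is an illustrative exchange, not captured wire traffic — the tool name and schema here are made up, though the message envelope follows JSON-RPC 2.0:

```python
import json

# What the client sends on connect: a standard JSON-RPC 2.0 request.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# A representative response; the tool metadata shape is illustrative.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "list_issues",
                "description": "List open issues in a repository.",
                "inputSchema": {
                    "type": "object",
                    "properties": {"repo": {"type": "string"}},
                    "required": ["repo"],
                },
            }
        ]
    },
}

# Over stdio transport, each message is serialized as one line of JSON.
print(json.dumps(request))
print(response["result"]["tools"][0]["name"])
```

The key point: nothing in this exchange is model-specific. Any host that can speak JSON-RPC can discover what a server offers.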
MCP vs Function Calling
This is the question developers ask first, and the answer surprises them: MCP does not replace function calling — it wraps it.
Function calling is the model-side mechanism. The LLM is trained to emit structured tool-use requests given a tool schema. Every major model provider supports it. It's a message format, not a distribution model.
MCP is a distribution and integration protocol. It says nothing about how the model emits tool calls. What it defines is how a tool lives outside the model app — how it advertises itself, how it's invoked, how permissions are scoped.
Put together: the model decides which tool to call (function calling). The host routes that call to the right MCP server (MCP). Same mechanism on the model side, totally different story on the operator side.
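That division of labor can be sketched in a few lines. Everything here is hypothetical host-side plumbing — the registry and dispatch function are illustrative names, not a real SDK API:

```python
# Hypothetical host-side routing: the model's function call is
# dispatched to whichever MCP server advertised that tool.
# Built from each server's tools/list response at connect time.
tool_registry = {
    "list_issues": "github-server",
    "run_query": "postgres-server",
}

def dispatch(tool_call: dict) -> str:
    """Route a model-emitted tool call to the owning MCP server."""
    server = tool_registry[tool_call["name"]]
    # A real host would send a tools/call JSON-RPC request here;
    # this sketch just reports the routing decision.
    return f"{tool_call['name']} -> {server}"

# The model side is unchanged: it emits a structured call...
call = {"name": "run_query", "arguments": {"sql": "SELECT 1"}}
# ...and the host, not the model, decides where it goes.
print(dispatch(call))  # run_query -> postgres-server
```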
The practical implication is huge. A Postgres MCP server your DBA wrote once can now be used by:
- Claude Desktop for ad-hoc analysis
- Your in-house support agent
- A data engineer's Cursor session
- Any future agent that speaks MCP
Without MCP, each of those needed a custom integration. With MCP, it's one server and four clients.
For a broader primer on what MCP is, see our existing post on Model Context Protocol.
Building Your First MCP Server
Let's build one. The target: an MCP server that exposes two tools — a note reader and a note writer — backed by a local file. You need Python 3.11+ and the mcp SDK.
uv venv
source .venv/bin/activate
uv pip install "mcp[cli]"
Then the server itself:
# server.py
from pathlib import Path
from mcp.server.fastmcp import FastMCP
mcp = FastMCP("notes")
NOTES = Path.home() / ".mcp_notes.txt"
@mcp.tool()
def add_note(text: str) -> str:
"""Append a note to the user's personal notes file."""
with NOTES.open("a") as f:
f.write(text.rstrip() + "\n")
return f"added: {text}"
@mcp.tool()
def list_notes() -> str:
"""Return all saved notes."""
if not NOTES.exists():
return "(no notes yet)"
return NOTES.read_text()
if __name__ == "__main__":
mcp.run()
That's a complete MCP server. The @mcp.tool() decorator turns each function into a tool the model can call; the docstring becomes its description and the type hints become its input schema.
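To make the schema-generation step concrete: the SDK derives a JSON Schema from the function signature. The dict below is a rough sketch of what gets advertised for add_note — exact key names vary by SDK version, so treat it as illustrative rather than the SDK's literal output:

```python
# Roughly what gets advertised for add_note, derived from its
# signature and docstring (a sketch; exact keys vary by SDK version).
add_note_tool = {
    "name": "add_note",
    "description": "Append a note to the user's personal notes file.",
    "inputSchema": {
        "type": "object",
        "properties": {"text": {"type": "string"}},
        "required": ["text"],
    },
}

print(add_note_tool["inputSchema"]["required"])  # ['text']
```

This is why the docstring and type hints matter: they are not decoration, they are the tool's public contract with every model that connects.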
To register it with Claude Desktop, add this to ~/Library/Application Support/Claude/claude_desktop_config.json (macOS):
{
"mcpServers": {
"notes": {
"command": "python",
"args": ["/absolute/path/to/server.py"]
}
}
}
Restart Claude Desktop. Open a new conversation and say "add a note that I need to renew my passport." The app will detect the tool, ask you to approve the call, and you will see your ~/.mcp_notes.txt file populate. Ask "what are my notes?" and it will list them back.
You just wrote a portable agent tool in 20 lines, and every MCP-aware app on your machine can now use it.
Real-World MCP Use Cases
Patterns that are working in production in 2026:
Internal knowledge access. An MCP server that exposes your company wiki, ticket system, or CRM. Developers can use it in Cursor while coding. Support agents can use it through a chat interface. Both get the same consistent view of internal data.
Database connectors. A read-only Postgres MCP server lets analysts run natural-language queries without handing raw DB credentials to every AI tool. The server enforces the permission boundary — the agent never sees the connection string.
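The permission-boundary idea can be sketched in a few lines. This uses SQLite standing in for Postgres so the example is self-contained, and the guard (reject anything that isn't a single SELECT) is illustrative — a production server would rely on a read-only database role and a real SQL parser, not string checks:

```python
import sqlite3

def run_readonly_query(conn: sqlite3.Connection, sql: str) -> list:
    """Execute a query only if it looks like a single read-only SELECT.

    Illustrative guard: production servers should use a read-only DB
    role and proper SQL parsing, not string inspection.
    """
    stripped = sql.strip().rstrip(";")
    if ";" in stripped or not stripped.lower().startswith("select"):
        raise PermissionError("only single SELECT statements are allowed")
    return conn.execute(stripped).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('ada')")

print(run_readonly_query(conn, "SELECT name FROM users"))  # [('ada',)]
try:
    run_readonly_query(conn, "DROP TABLE users")
except PermissionError as e:
    print(e)  # only single SELECT statements are allowed
```

The agent only ever sees query results; the credentials and the write path live entirely inside the server process.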
Dev environment integration. IDE plugins (Cursor, Zed, the VS Code extensions) ship with MCP client support. Teams publish an internal MCP server for "our codebase conventions" or "our deployment runbook" and every developer's agent immediately has access.
Personal automation. Home-lab setups where one MCP server exposes your Obsidian vault, another your calendar, a third your smart home. A single chat agent orchestrates across all of them.
Enterprise governance. Central IT publishes a catalog of approved MCP servers with documented scopes and audit trails. Individual teams can light up agents without re-doing the security review every time.
The Future of Agent-Tool Communication
A few predictions worth watching:
- MCP servers become as common as npm packages. Expect a registry, versioning, and a long tail of community servers for common SaaS tools.
- Remote MCP matters more than local. Streamable HTTP MCP servers hosted by SaaS vendors mean you don't install anything — you authorize a connection. This is closer to OAuth than to npm install.
- Agent-to-agent becomes standard. Agents calling agents via MCP (or a related protocol) is already here. Expect this to feel normal by late 2026.
- Tool permissions get strict. Early MCP integrations trusted tool results blindly; 2026 deployments treat tool output as user-input-class data — scanned, validated, and bounded.
If you want to track the spec, the official MCP documentation has the authoritative reference.
Conclusion
MCP is not the thing that makes agents possible — function calling already did that. MCP is the thing that makes agent tools portable. Write once, use everywhere. It's mundane infrastructure, and that is exactly why it will be everywhere by the end of 2026.
If you build agents, learn MCP now. If you maintain internal systems, publish an MCP server for them this quarter. The cost is low; the leverage compounds every time a new agent speaks the protocol.
For the full agent stack — from loops to tools to MCP to production — the MindloomHQ curriculum covers it in order, and our Agentic AI Development course is the flagship path.
FAQ
Is MCP only for Claude?
No. MCP was introduced by Anthropic but is an open protocol. Cursor, Zed, continue.dev, and many custom agent hosts support it. OpenAI and Google have their own complementary efforts, and many tool servers work across hosts.
Does MCP replace LangChain or other agent frameworks?
No. Frameworks orchestrate the agent (planning, memory, the loop). MCP standardizes how tools are packaged and discovered. A LangChain agent can use MCP servers as its tool source; they compose rather than compete.
Can I write an MCP server in a language other than Python?
Yes. Official SDKs exist for TypeScript, Python, Go, Rust, Java, C#, and more. The protocol is language-agnostic JSON-RPC. Most teams pick the language that matches the system they're exposing.
What's the security model for MCP?
The trust boundary sits at the host. Hosts typically prompt the user before the first use of each server and often per-tool. Servers should enforce their own authorization (the agent is just another client). Treat an MCP server like any other service in your network: least privilege, scoped credentials, audit logs.
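The server-side authorization idea can be sketched as an explicit scope check before every tool call. The scope names and the authorize function here are made up for illustration:

```python
# Illustrative least-privilege check inside a server: the connection's
# credential carries explicit scopes, and every tool declares what it
# needs. Scope names are hypothetical.

GRANTED_SCOPES = {"notes:read"}  # what this client was approved for

TOOL_SCOPES = {
    "list_notes": "notes:read",
    "add_note": "notes:write",
}

def authorize(tool_name: str) -> bool:
    """Allow a call only if the tool's required scope was granted."""
    return TOOL_SCOPES[tool_name] in GRANTED_SCOPES

print(authorize("list_notes"))  # True
print(authorize("add_note"))    # False
```

The point is that the check runs inside the server, so a misbehaving or compromised agent cannot widen its own access.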
When should I build an MCP server vs just using function calling?
Build an MCP server when more than one agent or app will use the same tool, when you want to separate tool lifecycle from agent lifecycle, or when you need a clear permissions boundary. For a one-off tool used inside a single codebase, plain function calling is simpler.