In late 2024, Anthropic released the Model Context Protocol (MCP) as an open standard. By 2026, it's become the dominant way to connect AI agents to external tools, data sources, and services. If you're building AI agents that interact with real systems — Jira, GitHub, Slack, databases, file systems — you should understand MCP and when to use it.
The Problem MCP Solves
Before MCP, connecting an AI agent to external tools meant writing custom integration code for every combination. Each AI framework had its own tool definition format. Each tool provider wrote its own integration. The result was a fragmentation problem that made building multi-tool agents significantly more work than it should have been.
Consider a developer who wants to build an agent that can read GitHub issues, update Jira tickets, and post to Slack. Before MCP, they would:
- Write a GitHub tool implementation for their specific agent framework
- Write a Jira tool implementation
- Write a Slack tool implementation
- Repeat this for every new project, every framework change, every model switch
The same tools, reimplemented over and over.
MCP solves this by defining a standard protocol. Any MCP-compliant client (Claude Desktop, Cursor, a custom agent) can connect to any MCP-compliant server (GitHub, Jira, Slack) without custom integration work. Write the server once; use it everywhere.
How MCP Works
MCP uses a client/server architecture.
The MCP server exposes capabilities — tools, resources, and prompts — over a standard interface. It runs as a separate process, either locally or remotely.
The MCP client is embedded in the AI application. It discovers the server's capabilities and makes them available to the LLM.
The three capability types:
Tools are functions the LLM can call. A GitHub MCP server might expose tools like create_issue, get_pull_request, list_repos. The LLM calls these tools and receives structured results. This is equivalent to the tool use / function calling you'd implement manually.
Resources are data sources the LLM can read. A filesystem MCP server exposes files as resources. A database MCP server exposes query results. Resources are read-only — they provide context, not actions.
Prompts are reusable prompt templates the server exposes. An application can select a prompt by name and the server returns the configured prompt text. This is less commonly used but allows server-side prompt management.
The transport layer uses JSON-RPC 2.0, typically over stdio for local servers or HTTP for remote ones. (The original remote transport was HTTP+SSE; the March 2025 spec revision replaced it with Streamable HTTP.)
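To make the wire format concrete, here is a sketch of the messages exchanged for a tool call, written as plain Python dicts. The method names tools/list and tools/call come from the MCP spec; the tool name, arguments, and result text are hypothetical.

```python
import json

# Client asks the server what tools it offers (MCP method: tools/list)
list_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# Client invokes one of those tools (MCP method: tools/call);
# the tool name and arguments here are hypothetical
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "create_issue",
        "arguments": {"title": "Fix login bug", "labels": ["bug"]},
    },
}

# The server's reply carries the matching id and a result payload
call_response = {
    "jsonrpc": "2.0",
    "id": 2,
    "result": {
        "content": [{"type": "text", "text": "Issue created"}],
    },
}

print(json.dumps(call_request, indent=2))
```

The MCP client handles this framing for you; you only see the structured tool results.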
┌─────────────────────────────────────┐
│ AI Application │
│ ┌──────────────┐ ┌──────────────┐ │
│ │ LLM (Claude)│ │ MCP Client │ │
│ └──────┬───────┘ └──────┬───────┘ │
│ │ tool requests │ calls │
└─────────┼─────────────────┼──────────┘
│ │
└─────────────────┘
│ JSON-RPC
┌─────────┴────────────────────┐
│ MCP Server │
│ tools / resources / prompts │
└──────────────────────────────┘
│
┌─────────┴────────────────────┐
│ External System │
│ (GitHub / Jira / Database) │
└──────────────────────────────┘
Real MCP Servers in Production
The ecosystem has grown quickly. Here are servers that engineers actually use:
GitHub MCP — create and read issues, manage pull requests, search repos, read file contents. Used for coding agents that need repository context.
Jira MCP — search issues by JQL, create and update tickets, transition status, read sprint boards. Used for planning agents and project management automation.
Slack MCP — read messages, post to channels, list workspace members. Used for notification agents and standup automation.
Filesystem MCP — read and write local files, list directories. Used for document processing agents. Important: scope it to a specific directory, not root.
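For example, the reference filesystem server takes its allowed directories as command-line arguments, so a Claude Desktop entry scoping it to a single directory might look like this (the path is a placeholder; adjust to your setup):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/me/Documents/projects"
      ]
    }
  }
}
```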
Postgres/SQLite MCP — execute read-only queries, inspect schema, describe tables. Used for data analysis agents.
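The read-only guarantee is worth enforcing rather than assuming. Below is a simplified illustration of the kind of statement guard such a server can apply before executing anything — a sketch, not the implementation of any published server; real servers lean on read-only transactions, database roles, or a proper SQL parser.

```python
# Crude substring scan: rejects anything that isn't a plain read.
# False positives are possible (e.g. a table named "updates"), which is
# the safe direction for a guard like this to fail.
FORBIDDEN = ("insert", "update", "delete", "drop", "alter", "create", "truncate", "grant")

def is_read_only(sql: str) -> bool:
    normalized = sql.strip().strip(";").lower()
    # Only allow statements that start like a read
    if not normalized.startswith(("select", "with", "explain")):
        return False
    # Catch writes smuggled into CTEs or subqueries
    return not any(word in normalized for word in FORBIDDEN)

print(is_read_only("SELECT * FROM users"))  # True
print(is_read_only("DROP TABLE users"))     # False
```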
Brave Search MCP — web search. Used to give agents access to current information.
Claude Desktop has native MCP support — you configure servers in claude_desktop_config.json (on macOS, ~/Library/Application Support/Claude/claude_desktop_config.json) and they become available in the UI. Cursor and other editors are adding MCP support rapidly.
Building a Simple MCP Server in Python
The mcp Python package makes building servers straightforward:
# pip install mcp
import asyncio
import json

from mcp import types
from mcp.server import Server
from mcp.server.stdio import stdio_server

app = Server("task-manager")
TASKS = []  # In production, use a real database

@app.list_tools()
async def list_tools() -> list[types.Tool]:
    return [
        types.Tool(
            name="add_task",
            description="Add a new task to the task list",
            inputSchema={
                "type": "object",
                "properties": {
                    "title": {"type": "string", "description": "Task title"},
                    "priority": {
                        "type": "string",
                        "enum": ["low", "medium", "high"],
                        "description": "Task priority",
                    },
                },
                "required": ["title"],
            },
        ),
        types.Tool(
            name="list_tasks",
            description="List all tasks, optionally filtered by priority",
            inputSchema={
                "type": "object",
                "properties": {
                    "priority": {"type": "string", "enum": ["low", "medium", "high"]},
                },
            },
        ),
    ]

@app.call_tool()
async def call_tool(name: str, arguments: dict) -> list[types.TextContent]:
    if name == "add_task":
        task = {
            "id": len(TASKS) + 1,
            "title": arguments["title"],
            "priority": arguments.get("priority", "medium"),
            "done": False,
        }
        TASKS.append(task)
        return [types.TextContent(type="text", text=f"Task created: {json.dumps(task)}")]
    if name == "list_tasks":
        priority_filter = arguments.get("priority")
        filtered = [t for t in TASKS if not priority_filter or t["priority"] == priority_filter]
        return [types.TextContent(type="text", text=json.dumps(filtered))]
    return [types.TextContent(type="text", text=f"Unknown tool: {name}")]

async def main():
    # stdio_server() yields (read, write) streams; run the server over them
    async with stdio_server() as (read, write):
        await app.run(read, write, app.create_initialization_options())

if __name__ == "__main__":
    asyncio.run(main())
To run this as an MCP server and connect it to Claude Desktop, add it to the config:
{
  "mcpServers": {
    "task-manager": {
      "command": "python",
      "args": ["/path/to/task_server.py"]
    }
  }
}
Claude Desktop will start the server automatically and expose its tools in every conversation.
MCP vs. Custom Tool Use: When to Use Which
Both patterns let an LLM call functions. The choice depends on your situation.
Use MCP when:
- You want the tool to work across multiple AI applications (Claude Desktop, Cursor, a custom agent, a web app). MCP gives you portability.
- You're building a tool that others will use. Publishing an MCP server is easier than building per-framework integrations.
- You're connecting to a system that already has an MCP server. Don't build what already exists.
- You want the tool to be available in interactive AI environments like Claude Desktop without writing a custom UI.
Use custom tool use (API-level function calling) when:
- You're building a single-application agent and the tool will never be reused elsewhere.
- You need fine-grained control over the tool's execution environment — authentication that runs inside your application, access to in-memory state, database connections your server manages.
- You're already deep in a specific framework (LangChain, LangGraph, CrewAI) and the overhead of running a separate MCP server process is not worth it.
- The tool is so simple (a calculation, a string transformation) that adding MCP infrastructure would be over-engineering.
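For contrast, here is the add_task tool from the server example above, expressed instead as an API-level tool definition in the shape Anthropic's Messages API expects (note the snake_case input_schema versus MCP's inputSchema). The request sketch in the comments assumes the anthropic package and an API key.

```python
# The same tool schema, expressed for direct (non-MCP) function calling.
add_task_tool = {
    "name": "add_task",
    "description": "Add a new task to the task list",
    "input_schema": {  # snake_case here; MCP uses inputSchema
        "type": "object",
        "properties": {
            "title": {"type": "string", "description": "Task title"},
            "priority": {
                "type": "string",
                "enum": ["low", "medium", "high"],
            },
        },
        "required": ["title"],
    },
}

# Usage sketch (requires the anthropic package and an API key):
# client.messages.create(
#     model="claude-sonnet-4-5",  # check current docs for model names
#     max_tokens=1024,
#     tools=[add_task_tool],
#     messages=[{"role": "user", "content": "Add a task: ship the release"}],
# )
```

With this pattern your application executes the tool itself when the model requests it — no separate server process involved.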
The pattern isn't either/or. A production agent might use MCP for external service integrations (GitHub, Jira) and custom tool use for internal operations (querying your application database, accessing your authentication context).
What This Means for AI Engineering
MCP is the plumbing that makes the AI agent ecosystem composable. Instead of every team reimplementing GitHub integrations, Slack integrations, and database connectors, the ecosystem converges on shared, well-maintained implementations.
If you're building AI products, you have two useful things to do with MCP:
- Consume existing MCP servers. Before building a custom integration, check whether an MCP server for that service already exists. The time savings are significant.
- Build MCP servers for internal tools. If your company has internal APIs, databases, or services that AI features need to access, wrapping them as MCP servers makes them available across all your AI applications — not just the one you built first.
If you want to go deep on MCP — the full protocol spec, authentication patterns, remote server deployment, and how to build production-grade MCP servers for internal tools — Phase 7 of the Agentic AI course at MindloomHQ covers it across 8 dedicated lessons. Phases 0 and 1 are completely free to start.