What Is MCP? The Universal Connector for AI Agents

[Figure: universal connector concept]

While building an internal chatbot for ticket triage, I hit the same wall most teams hit: every tool needed a custom connector. Jira had one integration, Zendesk had another, Slack had a third. Every time an API changed, half the integration code broke. The setup was fragile, siloed, and hard to debug.

That’s when I found MCP. Model Context Protocol is an open standard that gives AI agents a single way to connect to tools, APIs, and data sources. Instead of writing custom integrations for every combination of model and tool, you implement the protocol once on each side.


The Problem MCP Solves

Without a standard protocol, connecting AI models to tools creates an N×M problem. Every model needs a custom integration for every tool. Three models and five tools means fifteen connectors to build and maintain. One API change can break any of them.

MCP reduces this to N+M. Each tool implements the protocol once. Each model connects once. Three models and five tools means eight integration points instead of fifteen.
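The arithmetic behind those numbers is simple enough to state as two one-line functions:

```python
def connectors_without_mcp(models: int, tools: int) -> int:
    # Every model needs a bespoke integration for every tool: N x M.
    return models * tools


def connectors_with_mcp(models: int, tools: int) -> int:
    # Each side implements the protocol once: N + M.
    return models + tools


print(connectors_without_mcp(3, 5))  # 15
print(connectors_with_mcp(3, 5))     # 8
```

The gap widens fast: at ten models and twenty tools, that's 200 custom connectors versus 30 protocol implementations.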

Think of it like USB-C for AI. Before USB-C, every device had its own charger and cable. USB-C gave manufacturers one standard to target. MCP does the same thing for AI agent integrations: build the connector once, and any compatible model can use it.


How MCP Works

MCP defines three components that work together:

  • Host: The AI application that the user interacts with. This could be Claude Desktop, a VS Code extension, or a custom app you build yourself.
  • Client: The connector inside the host that speaks MCP protocol. Each client maintains a one-to-one connection with a server.
  • Server: A service that exposes tools, resources, or prompts via MCP. It wraps an existing API, database, or system and makes it available to any MCP-compatible host.

The flow works like this: a user sends a query to the host. The host, through its client, routes the request to the appropriate MCP server. The server executes the action (querying a database, calling an API, reading a file) and sends the result back through the same chain.
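That round trip can be sketched as a toy model in plain Python. This is an illustration of the host → client → server chain, not the real MCP SDK (a real server speaks JSON-RPC over stdio or HTTP, and the class names here are made up for the sketch):

```python
class ToyServer:
    """Exposes named tools, like an MCP server wrapping an API."""
    def __init__(self):
        self.tools = {"add": lambda a, b: a + b}

    def call_tool(self, name, **kwargs):
        # In real MCP this would be a JSON-RPC tools/call request.
        return self.tools[name](**kwargs)


class ToyClient:
    """The connector inside the host; one-to-one with a server."""
    def __init__(self, server):
        self.server = server

    def request(self, tool, **kwargs):
        return self.server.call_tool(tool, **kwargs)


class ToyHost:
    """The AI application the user talks to; routes via its client."""
    def __init__(self, client):
        self.client = client

    def handle_query(self, tool, **kwargs):
        result = self.client.request(tool, **kwargs)
        return f"Result: {result}"


host = ToyHost(ToyClient(ToyServer()))
print(host.handle_query("add", a=2, b=3))  # Result: 5
```

The point of the layering is that each class only knows about its immediate neighbor, which is what makes the pieces swappable.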

| Problem | Traditional approach | With MCP |
| --- | --- | --- |
| Integration | Custom for each service | Reusable, standardized |
| Scaling | Rewriting logic per app | Centralized, plug-and-play |
| Debugging | Hard to trace errors | Structured logging + tools |
| Switching LLMs | Risky and vendor-locked | Decoupled from the model |

Because each piece speaks a common protocol, you can swap models, add new tools, or replace backends without rewriting the rest of the stack.


What an MCP Server Looks Like

The Python SDK includes FastMCP, a lightweight framework that makes building MCP servers straightforward. Here’s a minimal example:

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Demo")

@mcp.tool()
def add(a: int, b: int) -> int:
    return a + b

@mcp.resource("greeting://{name}")
def get_greeting(name: str) -> str:
    return f"Hello, {name}!"

if __name__ == "__main__":
    mcp.run()  # start the server (stdio transport by default)

Decorators define tools and resources. Type hints generate the schema automatically. FastMCP handles all the protocol details under the hood, so you write normal Python functions and they become available to any MCP-compatible host.
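To make the "type hints generate the schema" step concrete, here is an illustrative sketch of how a function signature can be turned into a JSON-Schema-style tool description. This mimics the idea, not FastMCP's actual implementation; `tool_schema` and `PY_TO_JSON` are names invented for this example:

```python
import inspect
from typing import get_type_hints

# Map Python annotations to JSON Schema type names.
PY_TO_JSON = {int: "integer", str: "string", float: "number", bool: "boolean"}


def tool_schema(fn):
    """Derive a tool description from a function's signature."""
    hints = get_type_hints(fn)
    params = inspect.signature(fn).parameters
    return {
        "name": fn.__name__,
        "inputSchema": {
            "type": "object",
            "properties": {p: {"type": PY_TO_JSON[hints[p]]} for p in params},
            "required": list(params),
        },
    }


def add(a: int, b: int) -> int:
    return a + b


print(tool_schema(add)["inputSchema"]["properties"])
# {'a': {'type': 'integer'}, 'b': {'type': 'integer'}}
```

A host receives a description like this when it lists a server's tools, which is how the model knows what arguments each tool expects.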


Where MCP Fits Today

MCP has moved well past early adoption. Claude Desktop ships with native MCP support. VS Code extensions like Cline and Continue use it for tool integration. The community has built hundreds of servers covering databases, cloud APIs, file systems, and developer tools.

The protocol itself is production-ready. I put the architecture to the test by connecting a legacy Redmine system to an AI agent using MCP, with no changes to the original platform required.

The ecosystem around it is still maturing. Auth standardization is underway but not finalized. Enterprise observability tooling is limited. Remote server hosting patterns are still being established. These are solvable problems, and the pace of development is fast, but they’re worth knowing about if you’re evaluating MCP for production use today.


Going Deeper

This post covers the concepts. These posts go hands-on:

Written by Kevin Tan

Cloud Solutions Architect and Engineering Leader based in Singapore. I write about AWS, distributed systems, and building reliable software at scale.
