How I Linked a Legacy System to a Modern AI Agent — with MCP
“Most legacy systems were never built for AI — but that doesn’t mean they can’t be part of the future.”
Recently I needed to give an AI agent access to ticket information stored in Redmine, a trusted, long-standing system that still powers a lot of serious project work.
Like many legacy systems, Redmine wasn’t built with AI agents in mind, but its structured data makes it a great candidate for context-aware automation.
Having already explored the Model Context Protocol (MCP), I saw this as the perfect opportunity to put it into action.
Instead of rewriting or wrapping the system in brittle APIs, I built a clean MCP bridge — letting AI agents query Redmine content directly, with zero changes to the original platform.
🧠 Why Connect Legacy Systems to AI Agents?
What is MCP?
MCP (Model Context Protocol) is an open standard developed to connect large language models (LLMs) to real-world tools, data, and services. It’s being rapidly adopted by AI platforms like Claude, Gemini, Windows Copilot, and emerging developer tools. Unlike traditional APIs, MCP is designed for AI agents — focusing on task-oriented interfaces, context-rich descriptions, and structured simplicity.
New to MCP?
If you’d like a primer on how MCP works and why it matters, check out my earlier post:
👉 What If AI Agents Had a Universal Connector? Meet MCP
Rewriting legacy systems is costly. But what if you could plug them into modern AI workflows without touching the core code?
MCP is an emerging open standard that acts like a USB-C for AI — it defines how LLMs can connect to external tools, data, and resources. And with it, even the oldest software can become context-aware.
This post shares how I used MCP to:
- Turn Redmine’s data into AI-accessible resources
- Build a working MCP server in Python
- Avoid rewriting any part of the legacy system
🛠️ The Setup: Redmine, MCP, and FastAPI
Instead of writing custom plugins, I used:
- FastAPI as the server framework
- FastMCP to handle the Model Context Protocol logic
- python-redmine to connect with Redmine projects, issues, and wikis
- Server-Sent Events (SSE) to enable real-time communication with LLM agents
Here’s how it fits together:
[Redmine] → [python-redmine] → [FastAPI + FastMCP] → [LLM Agent]
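To make that chain concrete, here is a minimal sketch of the server wiring. It is illustrative rather than the repo’s exact code: the Redmine URL and API key are placeholders, and FastMCP’s run/mount API varies a little between versions.

```python
from fastmcp import FastMCP
from redminelib import Redmine

# Placeholder connection details; point these at your own instance.
redmine = Redmine("https://redmine.example.com", key="YOUR_API_KEY")

mcp = FastMCP("redmine-mcp-server")

@mcp.tool()
def list_redmine_projects() -> list[dict]:
    """Return all visible Redmine projects with names, identifiers, and descriptions."""
    return [
        {
            "id": project.id,
            "name": project.name,
            "identifier": project.identifier,
            "description": getattr(project, "description", ""),
        }
        for project in redmine.project.all()
    ]

if __name__ == "__main__":
    # Serve the MCP interface over SSE (the transport the agents connect to).
    mcp.run(transport="sse")
```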
💡 What the MCP Server Can Actually Do
This server doesn’t just expose raw data — it defines two task-specific tools for LLMs to interact with Redmine in a structured, safe way:
🧰 Available MCP Tools:
- get_redmine_issue(issue_id: int) — Fetches a specific issue with full metadata (project, status, author, assignee, timestamps)
- list_redmine_projects() — Returns all visible Redmine projects with names, identifiers, and descriptions
Both tools are registered using FastMCP and return clean JSON responses designed for LLM parsing. You can see the full payload examples in the GitHub repo’s README.
These aren’t just raw API calls — they’re goal-oriented tools tailored for AI agent consumption. They’re auto-discoverable through the /sse endpoint and follow the MCP registration spec.
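As an illustration, the issue tool boils down to something like this. It is a sketch that reuses the `mcp` and `redmine` objects from the snippet above; the real implementation in the repo may shape the payload differently.

```python
@mcp.tool()
def get_redmine_issue(issue_id: int) -> dict:
    """Fetch a single Redmine issue with its core metadata."""
    issue = redmine.issue.get(issue_id)
    return {
        "id": issue.id,
        "subject": issue.subject,
        "project": issue.project.name,
        "status": issue.status.name,
        "author": issue.author.name,
        # assignee is optional in Redmine, so guard the attribute lookup
        "assignee": issue.assigned_to.name if hasattr(issue, "assigned_to") else None,
        "created_on": str(issue.created_on),
        "updated_on": str(issue.updated_on),
    }
```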
⚙️ Deploy the Server (Dev or Docker)
You can run the server in two ways: locally with Python, or using Docker for a production-like setup.
🐍 Development Mode (Python)
uv venv                                              # create a local virtual environment
source .venv/bin/activate                            # activate it
uv pip install -e .                                  # install the server in editable mode
uv run fastapi dev src/redmine_mcp_server/main.py    # start the dev server with auto-reload
uv is a fast Python package manager (a drop-in replacement for pip and venv)
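Once the dev server is up, a quick sanity check (assuming the default port) is to hit the SSE endpoint and watch for the event stream:

```bash
# -N disables buffering so the event stream prints as it arrives
curl -N http://localhost:8000/sse
```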
🐳 Docker Deployment (Recommended)
cp .env.example .env.docker # Fill in Redmine URL and credentials
docker-compose up --build
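For reference, the compose file boils down to something like this sketch (the repo’s actual file may differ in detail):

```yaml
services:
  redmine-mcp-server:
    build: .
    ports:
      - "8000:8000"
    env_file:
      - .env.docker   # Redmine URL and API key live here
```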
Once the server is running, your MCP-compatible agent (e.g., Cursor, VS Code) can connect via: http://localhost:8000/sse
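In Cursor, for example, that connection is a small JSON entry. The exact file location and schema depend on your editor and version, so treat this as the general shape:

```json
{
  "mcpServers": {
    "redmine": {
      "url": "http://localhost:8000/sse"
    }
  }
}
```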
🧩 What This Unlocks
Thanks to MCP, AI agents can now:
- Read issues and wiki pages as structured resources
- Search and retrieve project history
- Act on legacy data with context, without needing custom plugins
In short: your existing system becomes an AI-aware source of truth.
🧪 Avoiding the OpenAPI Trap
While Redmine offers a REST API, I chose not to auto-generate tools from it. That approach usually creates a large, unfocused set of operations — which can overwhelm LLMs and reduce performance.
Instead, I exposed a curated set of read-only tools: listing projects and fetching issues. Each one has a clear name, type, and description, designed with LLM comprehension in mind.
This decision kept the interface small, purposeful, and easy to reason about.
🛠️ What’s Next: Designing Tools for AI Agents
Right now, this MCP server focuses on exposing read-only Redmine content. But MCP isn’t limited to just resources.
In future iterations, I’m planning to add task-oriented tools, such as:
- Summarizing recent issues
- Generating weekly digests from wiki edits
- Drafting release notes from resolved tickets
These tools will be purpose-built for LLMs — with example-rich prompts and goal-specific actions.
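As a taste of the direction, a digest-style tool might look like the hypothetical sketch below. It is not in the current repo, and it reuses the `mcp` and `redmine` objects from the earlier snippets:

```python
import datetime

@mcp.tool()
def summarize_recent_issues(project_id: str, days: int = 7) -> list[dict]:
    """Return issues updated in the last N days so an agent can draft a digest."""
    since = (datetime.date.today() - datetime.timedelta(days=days)).isoformat()
    recent = redmine.issue.filter(project_id=project_id, updated_on=f">={since}")
    return [
        {"id": issue.id, "subject": issue.subject, "status": issue.status.name}
        for issue in recent
    ]
```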
💬 Final Thoughts
Building for MCP isn’t just about wiring up endpoints. It’s about shifting your mindset to design for LLMs — creating clear, context-rich, and task-based interfaces.
If we do it right, even the most venerable legacy systems can become AI-native.
🚀 Try It Yourself
Curious about connecting your own legacy tools to AI agents?
Fork the MCP server repo, plug in your Redmine (or similar system), and let your AI agent start asking smarter questions.
If this sparked an idea, hit the 👏 or drop a comment — I’d love to hear what legacy systems you’re bringing into the AI loop.
Tags: #AI #OpenSource #MCP #LegacySoftware #LLMIntegration #Agents