
Most enterprises can’t rewrite their core systems. But they still need those systems to participate in AI workflows. This post shows how I turned Redmine into an AI-ready platform using MCP, without touching the original codebase.
I needed to give an AI agent access to tickets and project data stored in Redmine, a system that still powers serious project work across many organizations. The catch: rewriting it was off the table. Wrapping it in brittle custom APIs wasn’t much better.
Instead, I built an MCP server that acts as a capability membrane between the legacy system and the LLM. The agent gets structured, task-oriented access. Redmine stays untouched. Zero changes to the original platform.
Why MCP (And Not Just Function Calling)
If you’re building AI integrations today, you have two main options: function calling or MCP. Function calling defines what an agent can do. MCP defines what systems it can connect to.
The distinction matters for legacy systems. Function calling couples your tool definitions to a specific LLM provider. MCP gives you a portable, provider-agnostic interface that any MCP-compatible client (Claude, Gemini, Cursor, VS Code) can discover and use.
For this project, MCP was the right choice because:
- The integration needed to work across multiple AI clients
- I wanted a clear separation of concerns between the LLM’s capabilities and Redmine’s data
- Read-only access had to be enforced at the boundary, not trusted to the agent
New to MCP? Check out my primer: What Is MCP? The Universal Connector for AI Agents
Architecture: The Capability Membrane
The design principle is simple: the MCP server is a read-only membrane that sits between the LLM and the legacy system. It controls exactly what the agent can see and do, without modifying anything upstream.
Traditional Integration:
LLM → Full REST API → Legacy System (unbounded access)
Capability Membrane Pattern:
LLM → MCP Server (curated tools) → Legacy System (controlled access)
In practice, the stack looks like this:
[Redmine] → [python-redmine] → [FastAPI + FastMCP] → [LLM Agent]
The stack:
- FastAPI as the server framework
- FastMCP to handle MCP protocol logic
- python-redmine to connect with Redmine’s existing API
- Streamable HTTP for agent communication
No plugins, no schema migrations, no changes upstream. The MCP server owns the entire integration surface.
Designing the Tool Surface
This is where most MCP tutorials get it wrong. They expose every API endpoint as a tool and call it done. That approach overwhelms the LLM with options and degrades reasoning quality.
Instead, I deliberately reduced the capability surface to just two tools:
- `get_redmine_issue(issue_id: int)` - Fetches a specific issue with full metadata (project, status, author, assignee, timestamps)
- `list_redmine_projects()` - Returns all visible projects with names, identifiers, and descriptions
That’s it. Two tools. Both read-only. Both return clean JSON designed for LLM parsing, not raw API responses.
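To make "clean JSON designed for LLM parsing" concrete, here is a minimal sketch of the idea: a formatter that flattens a raw Redmine issue into the compact shape an agent can reason about. The field names and function name here are illustrative assumptions, not the server's actual schema (that lives in the repo's README).

```python
# Sketch: shape a raw Redmine issue into compact, LLM-friendly JSON.
# Field names are illustrative; the real payloads are documented in the repo.

def format_issue(raw: dict) -> dict:
    """Return only the fields an agent needs to reason about an issue."""
    return {
        "id": raw["id"],
        "subject": raw["subject"],
        "project": raw.get("project", {}).get("name"),
        "status": raw.get("status", {}).get("name"),
        "author": raw.get("author", {}).get("name"),
        "assignee": raw.get("assigned_to", {}).get("name"),  # may be unset
        "created_on": raw.get("created_on"),
        "updated_on": raw.get("updated_on"),
    }

raw = {
    "id": 42,
    "subject": "Login page times out",
    "project": {"id": 1, "name": "Website"},
    "status": {"id": 2, "name": "In Progress"},
    "author": {"id": 5, "name": "Alice"},
    "created_on": "2024-01-10T09:00:00Z",
    "updated_on": "2024-01-12T14:30:00Z",
}
print(format_issue(raw)["status"])  # the agent sees "In Progress", not a nested object
```

The point of the flattening: the agent never has to navigate nested API objects or guess which of a dozen fields matter.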
The full payload examples are in the GitHub repo’s README.
Why so few? Because a smaller, curated tool surface means the agent reasons better about when and how to use each tool. Every tool you add is cognitive overhead for the LLM. Once I dropped from a full API mirror to just two tools, the agent stopped hallucinating unsupported operations and tool selection became near-deterministic.
Deploy the Server (Dev or Docker)
You can run the server in two ways: locally with Python, or using Docker for a production-like setup.
Development Mode (Python)
uv venv
source .venv/bin/activate
uv pip install -e .
uv run fastapi dev src/redmine_mcp_server/main.py
uv is a fast Python package manager (a drop-in replacement for pip + venv).
Docker Deployment (Recommended)
cp .env.example .env.docker # Fill in Redmine URL and credentials
docker-compose up --build
Once the server is running, your MCP-compatible agent (e.g., Cursor, VS Code) can connect via: http://localhost:8000/mcp
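The exact client configuration varies by tool, but for Cursor-style clients it is typically a JSON entry pointing at the streamable HTTP endpoint. This is a hypothetical example; check your client's documentation for its exact schema:

```json
{
  "mcpServers": {
    "redmine": {
      "url": "http://localhost:8000/mcp"
    }
  }
}
```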
Why This Matters in Production
In most enterprises, the legacy system is the system of record. Replacing it introduces more risk than value. The question isn’t whether to replace it. It’s how to make it participate in AI workflows without destabilizing what already works.
MCP makes this possible by acting as an architectural boundary:
- The legacy system stays stable. No code changes, no new plugins, no schema modifications.
- The AI agent gets structured access through a well-defined interface.
- The MCP server enforces constraints (read-only, curated tools) that the agent cannot bypass.
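The "cannot bypass" property falls out of the architecture rather than from trusting the agent: the membrane only registers read tools, so there is nothing else to call. In miniature, and with hypothetical names (not the server's real internals), the dispatch looks like this:

```python
# Minimal sketch of boundary enforcement: the agent can only invoke
# what the membrane registered. Handlers here are stubs for illustration.

ALLOWED_TOOLS = {
    "get_redmine_issue": lambda issue_id: {"id": issue_id, "subject": "stub"},
    "list_redmine_projects": lambda: [{"name": "stub", "identifier": "stub"}],
}

def dispatch(tool: str, **kwargs):
    """Route a tool call; anything outside the curated surface is rejected."""
    if tool not in ALLOWED_TOOLS:
        # Write operations were never registered, so they cannot exist here.
        raise PermissionError(f"unknown tool: {tool}")
    return ALLOWED_TOOLS[tool](**kwargs)

print(dispatch("get_redmine_issue", issue_id=7)["id"])  # 7
# dispatch("delete_issue", issue_id=7) would raise PermissionError
```

In the real server, FastMCP's tool registration plays the role of `ALLOWED_TOOLS`: only the two read tools are registered, so the constraint is structural, not a policy the agent could talk its way around.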
Your existing system becomes an AI-aware source of truth, without ever knowing it.
Avoiding the OpenAPI Trap
Redmine has a REST API. The tempting shortcut is to auto-generate MCP tools from the OpenAPI spec. I tried this early on and abandoned it quickly.
The problem: auto-generated tools create a large, unfocused set of operations. The LLM sees dozens of endpoints and has to reason about which ones to use. Performance drops. Hallucinated tool calls increase.
The fix: curate aggressively. I exposed two read-only tools with clear names, typed parameters, and context-rich descriptions. Each one was designed with LLM comprehension in mind, not developer convenience.
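What "designed with LLM comprehension in mind" means in practice: the tool's name, typed parameter, and docstring all tell the model when to reach for it. A hedged sketch, with wording of my own invention (the real descriptions are in the repo):

```python
def get_redmine_issue(issue_id: int) -> dict:
    """Fetch a single Redmine issue by its numeric ID.

    Use this when the user references a specific ticket (e.g. "#1234").
    Returns project, status, author, assignee, and timestamps as flat
    JSON. Read-only: this tool never modifies the issue.
    """
    ...  # implementation omitted; the description is the point

# In MCP, a description like this docstring is what the agent reasons over
# when deciding whether to call the tool.
print(get_redmine_issue.__doc__.splitlines()[0])
```

Note what the description does that an OpenAPI summary usually doesn't: it names the trigger condition ("when the user references a specific ticket"), the return shape, and the safety contract.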
What I’d Do Differently Now
Since building this server, I’ve learned a few things the hard way:
- Start with two tools, not ten. You can always add more. You can’t easily undo a bloated tool surface once agents depend on it.
- Read-only first. Write operations introduce a whole class of safety concerns. Start read-only, prove the value, then expand deliberately.
- Design for the LLM, not the developer. Tool descriptions, parameter names, and return schemas should optimize for agent comprehension, not API conventions.
For a deeper comparison of when MCP makes sense versus function calling, see MCP vs Function Calling: Which Should Your AI Agent Use?
Final Thoughts
Building for MCP isn’t just about wiring up endpoints. It’s about shifting your mindset to design for LLMs: creating clear, context-rich, task-based interfaces.
After experimenting with function calling wrappers, OpenAPI auto-generation, and custom REST adapters, this capability membrane pattern is the safest approach I’ve found for legacy AI integration. MCP gives you a clean boundary between systems that must stay stable and agents that need to move fast. Get that boundary right, and even the oldest systems can become AI-native.
This same pattern now powers my Redmine MCP server, pdf-mcp, and other zero-touch integrations.
Try It Yourself
Curious about connecting your own legacy tools to AI agents?
Fork the MCP server repo, plug in your Redmine (or similar system), and let your AI agent start asking smarter questions.
Discussion
Comments are powered by GitHub Discussions. Sign in with GitHub to join the conversation.