Why I Built routermcp: Sharing MCP Tools Across Agents Without Losing My Mind

• By RouterMCP Team

I really like the idea behind the Model Context Protocol (MCP).

In theory, it gives us a kind of USB‑C for tools — a standard way for agents and IDEs to talk to servers that expose capabilities like:

  • running shell commands
  • reading/writing files
  • talking to external APIs
  • querying databases

In practice… my setup quickly turned into a pile of glue scripts, duplicated configs, and "why is this server not reachable from here?" debugging sessions.

That’s how routermcp was born.

This post is the story behind it: the pain points, what routermcp actually does, and where it still needs work.


The problem: MCP is great, but my setup was a mess

Here’s what my day‑to‑day looked like once I started seriously using MCP:

  • I had multiple MCP servers running: some official ones, some open source, some hacked together for specific projects.
  • I used multiple MCP clients/agents: Cursor, Claude Code, Codex, and some custom scripts.
  • A few of the servers I liked were STDIO‑only, and my infra was increasingly HTTP‑centric (server‑side automation, browser automation, etc.).
  • Every new server meant editing multiple configs in slightly different formats across tools.
  • Once a client was connected to a server, it basically saw all tools that server exposed — even the ones I didn’t want it to touch.

None of these individually is catastrophic, but together they felt like early Docker days: everyone running containers, everyone hacking their own scripts and wrappers, nobody happy about it.

I started to notice recurring patterns in the friction.

Pain #1: STDIO in an HTTP world

MCP servers are most commonly run over STDIO, and that’s great for:

  • running tools locally,
  • keeping things simple,
  • and letting MCP servers stay language‑agnostic.

But my real workloads started to drift toward environments that speak HTTP by default:

  • running MCP‑powered workflows on a remote server,
  • spinning up browser automation (Puppeteer / Playwright / Chrome) in containers or in the cloud,
  • exposing functionality to internal services that are already HTTP‑first.

Bridging STDIO‑only MCP servers into those environments meant:

  • writing one‑off bridges and proxies,
  • thinking about process lifecycles and IO streaming over and over,
  • lots of “it works on localhost but not in this container” moments.

I kept thinking: "Why am I re‑implementing this glue every time?"

Pain #2: Config explosion across agents

Next headache: configuration sprawl.

Each MCP‑aware client wants its own config:

  • Cursor wants MCP servers defined in its own way.
  • Claude Code has its own configuration surface.
  • Other agents / frameworks have theirs.

Add one new MCP server? Cool, now go update:

  • Cursor config
  • Claude Code config
  • Codex config
  • your own scripting harness

And each of those:

  • might require slightly different paths,
  • might need different environment variables,
  • might not support all the same options.

It’s nobody’s fault — it’s just what happens when you bolt an evolving protocol onto multiple independent tools.

But as a user, all I saw was copy‑paste work and subtle inconsistencies.

Pain #3: No central "tool firewall"

Another subtle but important problem: once a client connects to an MCP server, it can typically see all the tools that server exposes.

Sometimes that’s fine. Other times, I wanted:

  • an agent that can use filesystem tools but not shell,
  • a read‑only agent that can only call certain APIs,
  • a UI where I could say "this group of tools is safe for this agent; everything else is invisible".

I didn’t want to clone or fork servers just to "hide" tools. I wanted something like a small firewall:

This client → these servers → only these tools

The protocol has the pieces to make that work — but again, the missing part was glue.

Pain #4: The copy‑paste smell

The final straw was the classic developer smell: I kept copy‑pasting.

  • Same server definitions across multiple configs.
  • Same environment variables.
  • Same notes in README files explaining how to wire things together.

When I catch myself copy‑pasting the same wiring over and over, it usually means "this should be an actual component".

In this case, the missing component was obvious in hindsight:

I needed a router for MCP.


Enter routermcp

So I built routermcp as a small, focused layer on top of MCP.

It’s not an agent framework. It’s not a hosting platform. It’s a router that sits between your agents and your MCP servers and tries to make the annoying parts go away.

At a high level, routermcp does four things:

  1. Wraps STDIO‑based MCP servers in HTTP.
  2. Aggregates multiple MCP servers behind a single endpoint.
  3. Lets you filter which tools are visible to which clients.
  4. Provides a CLI wizard so you don’t have to hand‑edit configs.

Let’s unpack that a bit.


What routermcp does (in practical terms)

1. STDIO → HTTP bridge for MCP servers

Routermcp lets you run your MCP servers the way they were designed — as STDIO processes — while exposing them as an HTTP/SSE endpoint.

In practice, that means:

  • Your backend MCP servers still speak STDIO.
  • routermcp runs as an HTTP server.
  • It translates between HTTP requests/events and the underlying STDIO streams.

Why this is useful:

  • You can run MCP‑powered workflows from environments that expect HTTP.
  • It’s easier to integrate with server‑side automation, browser automation, and other HTTP‑native systems.
  • You don’t have to modify the MCP servers themselves.

I wanted to treat MCP servers like any other internal service: make a request, stream responses, log things, and manage them via standard infrastructure.
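To make the bridge idea concrete, here is the core move in miniature — a hedged Python sketch, not routermcp’s actual code: write one JSON‑RPC message to the server’s stdin, read the reply from stdout, and let an HTTP handler do that per request. The echo subprocess stands in for a real MCP server.

```python
import json
import subprocess
import sys

def forward_jsonrpc(proc, request):
    """Write one JSON-RPC message to a STDIO server and read one reply line.

    This is the heart of any STDIO<->HTTP bridge: an HTTP handler would call
    this per request and stream the result back. (Illustrative sketch only.)
    """
    proc.stdin.write((json.dumps(request) + "\n").encode())
    proc.stdin.flush()
    return json.loads(proc.stdout.readline())

# Stand-in for a real MCP server: a subprocess that echoes each line back.
echo = subprocess.Popen(
    [sys.executable, "-c",
     "import sys\n"
     "while True:\n"
     "    line = sys.stdin.readline()\n"
     "    if not line: break\n"
     "    sys.stdout.write(line); sys.stdout.flush()"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE,
)
resp = forward_jsonrpc(echo, {"jsonrpc": "2.0", "id": 1, "method": "tools/list"})
print(resp["method"])  # the echo "server" returns the request unchanged
echo.terminate()
```

A real bridge also has to manage the server’s process lifecycle and stream partial results, which is exactly the glue worth writing once instead of per project.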

2. Aggregate multiple MCP servers into one endpoint

Instead of each agent connecting to multiple MCP servers directly, routermcp acts as a hub:

  • Each backend MCP server is registered with routermcp.
  • Agents (Cursor, Claude Code, Codex, etc.) connect to one MCP endpoint: routermcp.
  • routermcp then fans out the requests to the appropriate underlying server based on configuration.

From the point of view of your agent:

"There is just one MCP server I talk to. It happens to expose a bunch of tools."

From your point of view:

"I can plug in or unplug individual MCP servers behind the scenes without touching every client config."

This is the part that directly attacks config explosion.
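The hub idea can be sketched in a few lines of Python. Everything here — the class names, the dot‑prefixing scheme — is invented for illustration; it just shows how one endpoint can front many servers:

```python
class Backend:
    """Stand-in for one MCP server (hypothetical API, for illustration)."""
    def __init__(self, name, tools):
        self.name = name
        self._tools = tools  # tool name -> callable

    def list_tools(self):
        return list(self._tools)

    def call_tool(self, tool, args):
        return self._tools[tool](**args)


class Hub:
    """One endpoint in front of many backends: the routermcp idea in miniature."""
    def __init__(self, backends):
        self.backends = {b.name: b for b in backends}

    def list_tools(self):
        # Prefix tool names with the server name so the flat list stays unambiguous.
        return [f"{name}.{tool}"
                for name, backend in self.backends.items()
                for tool in backend.list_tools()]

    def call_tool(self, qualified, args):
        server, tool = qualified.split(".", 1)  # fan out by prefix
        return self.backends[server].call_tool(tool, args)


fs = Backend("fs", {"read_file": lambda path: f"<contents of {path}>"})
sh = Backend("shell", {"run": lambda cmd: f"ran {cmd}"})
hub = Hub([fs, sh])

print(hub.list_tools())  # ['fs.read_file', 'shell.run']
print(hub.call_tool("fs.read_file", {"path": "/etc/hosts"}))
```

Swapping a backend in or out only touches the hub’s registry, never the agents — which is the whole point.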

3. Tool‑level allow/deny routing

Routermcp doesn’t just aggregate servers; it also lets you control which tools are visible to which clients.

Think of it as a little policy layer:

  • You define which MCP servers exist.
  • You define which tools from those servers are exposed.
  • You can scope that exposure per client or per environment.

Some examples of how I use this:

  • A "safe" profile for agents that should never be able to run shell or touch production.
  • A "read‑only" profile where only data‑fetching tools are visible.
  • A "local dev" profile that exposes everything for experimentation.

Instead of forking MCP servers or forking configs per agent, routermcp lets me centralize that logic.
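As a sketch of what such profiles could look like — the profile and tool names are made up, and this is not routermcp’s actual config format — the policy layer is essentially a filter on listing plus a guard on calling:

```python
# A per-client "tool firewall" in miniature (names invented for illustration).
PROFILES = {
    "safe":      {"fs.read_file"},                                  # no shell, no writes
    "read_only": {"fs.read_file", "api.fetch"},
    "local_dev": {"fs.read_file", "fs.write_file", "shell.run", "api.fetch"},
}

ALL_TOOLS = ["fs.read_file", "fs.write_file", "shell.run", "api.fetch"]

def visible_tools(profile):
    """What a client on this profile sees when it lists tools."""
    return [t for t in ALL_TOOLS if t in PROFILES[profile]]

def check_call(profile, tool):
    """Reject calls to tools the profile was never shown."""
    if tool not in PROFILES[profile]:
        raise PermissionError(f"{tool} is not exposed to profile {profile!r}")

print(visible_tools("safe"))          # ['fs.read_file']
check_call("read_only", "api.fetch")  # allowed: no exception raised
```

Filtering the list matters as much as guarding the call: a tool an agent never sees is a tool it never tries to use.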

4. CLI configuration wizard

I also didn’t want to write another wall of JSON just to get started.

So routermcp ships with a CLI configuration wizard.

At a high level, it works like this:

  1. You run something like routermcp init.
  2. The CLI asks which MCP servers you want to register.
  3. For each server, it fetches the tools it exposes.
  4. You select which tools should be visible for which clients / environments.
  5. It writes out the config for you.

You can still hand‑edit the config file if you like, but the wizard makes it easy to:

  • experiment,
  • toggle tools on or off,
  • and avoid subtle typos.

5. One MCP config for many agents

Put all of this together and you end up with a nice property:

All of your MCP‑aware agents talk to routermcp, and routermcp talks to your actual MCP servers.

That gives you:

  • Single source of truth for MCP configuration.
  • Consistent tool visibility across agents.
  • Less duplication when you add, remove, or modify a server.

Instead of:

  • adding a server → editing 3–4 configs → hoping you didn’t miss one,

You can:

  • add a server to routermcp,
  • adjust its tool exposure,
  • restart/reload routermcp,
  • and all your agents see the new, clean view.

How routermcp works (high‑level architecture)

Here’s a simplified view of what’s happening under the hood.

Components

  • Agent / Client – Cursor, Claude Code, Codex, or any MCP‑aware tool.
  • routermcp – a single MCP endpoint implementing the protocol over HTTP/STDIO.
  • Backend MCP servers – the actual servers that implement tools: filesystem, shell, APIs, etc.

Flow

  1. An agent connects to routermcp as if it were a normal MCP server.

  2. routermcp loads its configuration:

    • which backend MCP servers exist,
    • which tools they expose,
    • which tools are allowed per client/environment.
  3. When the agent lists tools, routermcp returns a filtered, synthetic list based on that config.

  4. When the agent calls a tool, routermcp:

    • determines which backend server owns that tool,
    • checks whether the call is allowed,
    • forwards the request over STDIO to the appropriate server,
    • streams the response back to the agent.

This design keeps routermcp relatively simple:

  • It doesn’t need to know how tools are implemented.
  • It only needs to know where a tool lives and whether it’s allowed.
  • The heavy lifting stays in the MCP servers themselves.
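Putting the list and call steps together, a toy message handler might look like this. The field names are invented, and the actual STDIO forwarding in steps 4c/4d is stubbed out — this only shows the routing decisions:

```python
# The flow above at the JSON-RPC level (illustrative sketch, not routermcp's code).
CONFIG = {
    "tools": [
        {"name": "read_file", "server": "fs"},
        {"name": "run", "server": "shell"},
    ],
    "clients": {"cursor": {"allow": {"read_file"}}},
}

def handle(client, request):
    method = request["method"]
    if method == "tools/list":
        # Step 3: return a filtered, synthetic tool list for this client.
        allow = CONFIG["clients"][client]["allow"]
        return {"tools": [t["name"] for t in CONFIG["tools"] if t["name"] in allow]}
    if method == "tools/call":
        name = request["params"]["name"]
        if name not in CONFIG["clients"][client]["allow"]:
            return {"error": f"{name} is not allowed for {client}"}      # step 4b
        owner = next(t["server"] for t in CONFIG["tools"] if t["name"] == name)
        return {"forwarded_to": owner}  # steps 4c/4d would go over STDIO here
    return {"error": f"unsupported method {method}"}

print(handle("cursor", {"method": "tools/list"}))
# -> {'tools': ['read_file']}
print(handle("cursor", {"method": "tools/call", "params": {"name": "run"}}))
# -> {'error': 'run is not allowed for cursor'}
```

Note that the router never executes a tool itself; it only decides where a request goes and whether it is allowed to go there.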

What routermcp is not

Any time you add a layer to a stack, there’s a risk of over‑promising.

So here’s what routermcp is not trying to be:

  • It’s not a full "MCP cloud platform" with multi‑tenant hosting, dashboards, and billing.
  • It’s not an AI agent framework with planners, memories, and orchestration.
  • It’s not a security product (yet) — it’s a routing and configuration tool.

Routermcp is deliberately narrow:

it focuses on routing, configuration, and visibility between agents and MCP servers.

I’d rather have it do those few things well than pretend to solve everything.


Rough edges and open questions

Routermcp is still evolving, and there are plenty of rough edges.

Some of the things I’m actively thinking about:

  • Authentication and multi‑tenant setups

    • Right now, routermcp assumes a relatively trusted environment.
    • There’s more work to do around auth, per‑user scoping, and proper isolation.
  • Performance and latency

    • Every extra hop adds some overhead.
    • The STDIO↔HTTP bridge needs profiling and tuning under real workloads.
  • Failure handling

    • What happens when a backend MCP server crashes mid‑request?
    • How should routermcp handle retries, backoff, and health checks?
  • Configuration UX

    • The CLI wizard is a good start, but power users may want richer ways to define policies.
    • I’m exploring how to keep configs expressive without becoming a full DSL.

I’m intentionally keeping the scope modest for now so that it remains understandable and hackable.


Who might find this useful?

You might find routermcp useful if:

  • You run multiple MCP servers and are tired of wiring them separately into each client.
  • You juggle multiple agents/IDEs (Cursor, Claude Code, Codex, custom tools) and want a single MCP endpoint to point them at.
  • You’d like a tool‑level firewall to decide which tools are visible where.
  • You want to use STDIO‑based MCP servers from HTTP‑centric environments.

If you’re in one of those buckets, you’re basically the target user I had in mind when building this.


Try it out

If you want to try routermcp:

  1. Install it (instructions in the repo).
  2. Run the CLI wizard to register your existing MCP servers.
  3. Point your agents at routermcp instead of directly at each server.
  4. Experiment with turning tools on/off and observing what changes.

I’d love to hear how it behaves in real setups that are more chaotic than mine.


Feedback very welcome

I built routermcp because I was tired of:

  • juggling configs,
  • debugging STDIO bridges,
  • and worrying which agent was talking to which tools.

It already makes my own stack nicer to live with, but I’m very aware that:

  • my use cases are just a subset of what people are doing with MCP,
  • there are probably better ways to structure some parts of this.

If you do try routermcp, I’d love feedback on things like:

  • What felt confusing or under‑documented?
  • What’s missing for you to use this in a serious environment?
  • What would make it safer, easier, or more powerful without turning it into a monster?

Feel free to open issues, send PRs, or just tell me what’s broken.

Either way, I hope this post gave you a decent picture of why routermcp exists — and whether it might make your MCP life a little less painful.