
MCP Solves Connectivity. It Doesn't Solve Governance. Here's the Difference.

The Model Context Protocol standardises how AI agents discover and call tools. But the protocol says nothing about who is allowed to call what, at what rate, with what audit trail. That part is still your problem.

  • mcp
  • governance
  • ai
  • security
  • compliance
Zerq team

MCP has become the default protocol for connecting AI models to external tools. Every major AI assistant, IDE extension, and agentic framework now speaks it. That standardisation is genuinely valuable: instead of a bespoke connector per tool, you have one protocol, one discovery mechanism, one invocation pattern.

But standardising the connection layer is not the same as governing access. And the confusion between the two is already creating security debt.

What MCP actually defines

The Model Context Protocol specifies three things:

  • Discovery: how a client asks "what tools exist here and what are their schemas?"
  • Invocation: how a client calls a tool and receives a structured response
  • Transport: the message framing over SSE or stdio
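Concretely, the protocol rides on JSON-RPC 2.0: discovery is a `tools/list` request, invocation is a `tools/call` request. A rough sketch of the two message shapes (the tool name and arguments here are made up; the exact envelope can vary by protocol revision):

```python
import json

# Discovery: the client asks which tools the server exposes.
list_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# Invocation: the client calls one tool with structured arguments.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "create_ticket",  # a tool name from the discovery response
        "arguments": {"title": "VPN down", "priority": "high"},
    },
}

print(json.dumps(call_request, indent=2))
```

Notice what is absent: nothing in either message identifies the caller, its entitlements, or its budget. Everything governance needs has to be layered around these envelopes.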

That is it. The spec is deliberately narrow. It does not define:

  • Which clients are allowed to call which tools
  • How many calls a client can make per minute
  • What a complete audit log of tool invocations should contain
  • How you revoke a client's access without restarting the server
  • How you enforce RBAC across a catalog of tools owned by different teams
  • How you prove to a compliance auditor that only authorised agents accessed sensitive operations

This is not a gap in the spec — it is an intentional boundary. The protocol team correctly decided that transport and governance are different layers. The problem is that most teams deploying MCP servers treat "the agent can connect" as equivalent to "the agent has governed access."

The four governance gaps MCP leaves open

1. Authentication is optional and implementation-defined

The MCP spec supports authentication but does not mandate a specific mechanism. In practice, deployed MCP servers fall into three buckets: no auth, a static API key, or a bespoke token scheme. A 2025 audit of 5,200 public MCP server configurations found 53% using static keys. Static keys do not expire, do not scope, and do not produce per-client identity in logs.

Connecting to a tool server is not the same as proving your identity to an authorisation policy that knows which scopes you hold.
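The difference is easy to make concrete. A minimal sketch (key, claim names, and scopes are all illustrative): a static key check proves only possession, while a scoped, expiring token — whose claims you would obtain by verifying a signed token against your IdP — proves identity and entitlements:

```python
import time

STATIC_KEYS = {"sk-prod-7f3a"}  # hypothetical shared key: no identity, no expiry, no scope

def check_static_key(key: str) -> bool:
    # Passes or fails, but tells you nothing about *who* called.
    return key in STATIC_KEYS

def check_scoped_token(claims: dict, required_scope: str) -> bool:
    # `claims` would come from verifying a signed token against your IdP.
    if claims["exp"] < time.time():  # expired credentials are rejected automatically
        return False
    return required_scope in claims["scopes"]

claims = {
    "sub": "agent-billing-01",       # per-client identity, available to every log line
    "exp": time.time() + 300,        # short-lived: natural remediation path
    "scopes": ["tickets:read"],
}
assert check_scoped_token(claims, "tickets:read")
assert not check_scoped_token(claims, "tickets:write")  # scope not held
```

The scoped check yields a caller identity (`sub`) for audit records and an expiry-based revocation path; the static check yields neither.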

2. Tool-level authorisation is not in the protocol

An MCP server exposes a list of tools. The protocol has no concept of "this client is allowed to call tool A but not tool B." You either connect or you don't. Any per-tool access control has to be built on top, outside the protocol — typically as custom middleware inside each server.

That means every MCP server author independently decides how to enforce authorisation. In practice, most do not enforce it at all during early development, and that pattern calcifies.
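Because the protocol has no per-tool concept, this check has to live in a wrapper around dispatch. A minimal sketch of a per-client, per-tool allowlist (the policy table and client names are illustrative):

```python
# Per-client, per-tool allowlist, checked before any dispatch to the tool.
POLICY = {
    "agent-support": {"search_kb", "create_ticket"},
    "agent-analytics": {"search_kb"},
}

class ToolAccessDenied(Exception):
    pass

def authorize(client_id: str, tool_name: str) -> None:
    allowed = POLICY.get(client_id, set())  # unknown clients get no tools
    if tool_name not in allowed:
        raise ToolAccessDenied(f"{client_id} may not call {tool_name}")

authorize("agent-support", "create_ticket")  # permitted: passes silently

denied = False
try:
    authorize("agent-analytics", "create_ticket")
except ToolAccessDenied:
    denied = True  # tool not in this client's allowlist
```

Enforcing this once, at a gateway, is what avoids every server author reimplementing (or skipping) the same check.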

3. The audit log format is undefined

MCP does not specify what an audit record looks like, what fields it must contain, or where it goes. Some server implementations emit structured logs; many emit nothing beyond a stdout line. When a compliance team asks "show me every tool call made by this AI agent on this date, with the identity that made it and the response it received," the answer from a raw MCP deployment is typically: we don't have that.
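A workable record does not need to be elaborate: one structured JSON line per invocation, with a stable schema, is enough for a SIEM to index without custom parsing. The field names below are a suggestion, not a standard:

```python
import json
import uuid
from datetime import datetime, timezone

def audit_record(client_id, tool, params, status, session_id):
    # One JSON line per invocation: every field a compliance query will ask for.
    return json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "client_id": client_id,   # verified identity, not a shared key
        "tool": tool,
        "params": params,         # consider redacting sensitive fields before logging
        "status": status,
        "session_id": session_id, # correlates multi-call agent sessions
        "event_id": str(uuid.uuid4()),
    })

line = audit_record("agent-support", "create_ticket",
                    {"title": "VPN down"}, "ok", "sess-42")
print(line)
```

With records like this, "show me every tool call made by this agent on this date" becomes a log query instead of a forensic exercise.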

4. Rate limiting and quota enforcement are absent

The protocol has no throttling primitives. Clients can call tools as fast as the server accepts connections. For tools that call paid APIs, write to databases, or trigger business workflows, an unthrottled agent is an operational risk — not just a cost risk.
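A gateway-side guard can be as simple as a token bucket per (client, tool) pair, so a runaway agent exhausts only its own budget. A sketch with illustrative limits:

```python
import time
from collections import defaultdict

class TokenBucket:
    """Allows `rate` calls per second, with bursts up to `capacity`."""
    def __init__(self, rate: float, capacity: int):
        self.rate, self.capacity = rate, capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# One bucket per (client, tool) pair.
buckets = defaultdict(lambda: TokenBucket(rate=1.0, capacity=5))

allowed = [buckets[("agent-support", "create_ticket")].allow() for _ in range(7)]
# The first 5 calls fit the burst capacity; later calls wait for tokens to refill.
```

The burst capacity is what distinguishes a legitimate spike (absorbed by the bucket) from a runaway loop (throttled once the bucket drains).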

Why this gap matters more than it looks

Each governance gap in isolation is manageable. Together, they combine into a pattern familiar from earlier eras of API proliferation: shadow integrations with no access controls, no audit trail, and no defined owner.

The specific risks for MCP deployments:

Compliance evidence failures. Regulated industries need audit trails at the tool invocation level — not just "an AI call came in" but "this specific agent identity called this specific tool with these parameters and received this response." MCP alone does not produce that record.

Access review gaps. Your quarterly access review covers human users and service accounts. MCP client credentials often exist outside that process — provisioned for a pilot, never added to the identity lifecycle, never deprovisioned when the pilot ends.

Blast radius on credential compromise. A static MCP key that authenticates to all tools on a server means compromising one credential compromises all tool access for every agent that uses that key. There is no per-tool revocation path.

Incident response blindspot. When an AI agent causes an unexpected action downstream — creates a record it shouldn't, calls a paid API repeatedly, triggers a workflow — the question is immediately "which agent, which session, what exactly did it do?" Without structured per-invocation logs at the MCP layer, that question takes hours to answer from scattered application logs.

What governance at the MCP layer actually requires

Filling the gap does not require replacing MCP — it requires putting a governance layer in front of MCP servers that speaks the same protocol downstream. That layer needs to handle:

Identity and authorisation

  • Validate client identity against your IdP on every request, not just at connection time
  • Enforce per-client, per-tool access policies based on scopes — the same model you already use for REST API access
  • Support short-lived credentials with automatic expiry so there is a natural remediation path when an agent is decommissioned

Rate limits and quotas

  • Per-client and per-tool call budgets enforced at the gateway, not inside each server
  • Burst handling that distinguishes a runaway agent from a legitimate spike

Structured audit records

  • Every tool invocation recorded with: client identity, tool name, input parameters, response status, timestamp, session correlation ID
  • Log format compatible with your SIEM so alert rules can be written against it without custom parsing

Access reviews

  • MCP client credentials managed in the same lifecycle as service accounts — provisioned with an owner, included in quarterly reviews, deprovisioned on offboarding
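Put together, the gateway's per-request pipeline is short, and the ordering matters: identity before policy, policy before quota, and an audit record regardless of outcome. A compressed sketch with stubbed checks (all names hypothetical; the downstream forward is elided):

```python
def handle_tool_call(token_claims, tool, params, *, policy, bucket, log):
    """Gateway pipeline: authenticate -> authorise -> rate-limit -> dispatch -> audit."""
    client = token_claims["sub"]  # identity from an already-verified token
    status = "denied"
    try:
        if tool not in policy.get(client, set()):
            return {"error": "forbidden"}
        if not bucket.allow():
            status = "throttled"
            return {"error": "rate_limited"}
        # ... forward the call to the downstream MCP server here ...
        status = "ok"
        return {"result": "forwarded"}
    finally:
        # Audit every attempt, including denials and throttles.
        log.append({"client": client, "tool": tool, "status": status})

class AlwaysAllow:  # stand-in for a real rate limiter
    def allow(self):
        return True

log = []
policy = {"agent-support": {"create_ticket"}}
out = handle_tool_call({"sub": "agent-support"}, "create_ticket",
                       {"title": "VPN down"}, policy=policy,
                       bucket=AlwaysAllow(), log=log)
out_denied = handle_tool_call({"sub": "agent-support"}, "drop_tables", {},
                              policy=policy, bucket=AlwaysAllow(), log=log)
```

The `finally` block is the point: denied and throttled attempts are often the most interesting entries in the audit trail, so logging must not depend on the call succeeding.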

The protocol and the governance layer are not the same thing

MCP is a transport standard. It is excellent at what it does. But "the agent can talk to the server" is the beginning of the story for any production deployment in a regulated environment, not the end.

The governance layer — identity, authorisation, rate limits, audit, lifecycle — lives above the protocol. It is the same layer you built for REST APIs over the last decade. The architecture question for 2026 is whether you build it again from scratch for every MCP server, or whether you run it once at the API layer and let every MCP server inherit it.


If you are planning MCP deployments for production AI agents, Zerq handles the governance layer — the same RBAC, audit trail, and quota controls your REST APIs already use, extended to MCP tool servers without per-server custom code. See how Zerq handles MCP or request a demo.