53% of AI agent integrations use static API keys. Here's what goes wrong — and how to fix it.
Most MCP server deployments hand AI agents long-lived static keys with no rate limits and no audit trail. Here's the security failure pattern — and the architectural fix.
- ai
- security
- mcp
- governance
A security audit of 5,200 MCP servers found that 53% use static API keys or personal access tokens. Only 8.5% use OAuth — the standard that every other enterprise integration has moved to. Researchers found 24,008 secrets in MCP-related configuration files on public GitHub. Of those, over 2,000 were still valid.
This is not a theoretical risk. AI agents are calling production APIs right now, with credentials that never rotate, from processes that no one is monitoring.
Why the MCP security gap exists
The Model Context Protocol exploded in 2025. Over 13,000 MCP servers launched on GitHub that year alone. Teams building them optimised for getting something working — not for production security. The result is a recognisable pattern:
- Developer creates an MCP server to let an AI assistant call internal APIs
- Gives it a long-lived API key or personal access token with broad permissions
- Stores the key in a .env file or MCP config
- Ships it — no rate limits, no audit log, no RBAC
Six months later, the key has been committed to a repo, shared with three contractors, and is sitting in a config file on four laptops. Nobody knows which AI tools are using it or what they have done with it.
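The leaked-credential half of that audit finding is easy to reproduce locally. Here is a minimal sketch of scanning a tree of `.env` and MCP config files for static-looking tokens; the patterns and the `mcp.json` filename are illustrative assumptions, and real scanners such as gitleaks or trufflehog use far more rules plus entropy checks.

```python
import re
from pathlib import Path

# Hypothetical token patterns for illustration only.
TOKEN_PATTERNS = [
    re.compile(r"ghp_[A-Za-z0-9]{36}"),   # GitHub personal access token shape
    re.compile(r"sk-[A-Za-z0-9]{20,}"),   # generic "sk-" style API key
]

def scan_file(path: Path) -> list[str]:
    """Return the static-looking tokens found in one config file."""
    text = path.read_text(errors="ignore")
    hits: list[str] = []
    for pattern in TOKEN_PATTERNS:
        hits.extend(pattern.findall(text))
    return hits

def scan_tree(root: Path) -> dict[str, list[str]]:
    """Scan .env files and MCP configs under a directory tree."""
    findings: dict[str, list[str]] = {}
    for path in root.rglob("*"):
        if path.name in (".env", "mcp.json"):
            hits = scan_file(path)
            if hits:
                findings[str(path)] = hits
    return findings
```

Running something like this against your own repos before an auditor does is a cheap way to confirm whether the pattern above applies to you.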
What "the same front door" means in practice
The architectural fix is not a new security tool. It is applying the governance model you already use for every other integration:
Same identity. An AI agent should authenticate with the same credentials your mobile app or partner uses — scoped to what it is allowed to do, not a god-mode key. That means client IDs and profiles, not static tokens with full access.
Same rate limits. A runaway agent loop can call an API thousands of times a minute. Your upstream services were not designed for that. Rate limits at the gateway — applied equally to AI and REST traffic — protect them.
Same audit trail. When something goes wrong, you need to answer: which agent called which endpoint, with what parameters, at what time? That answer lives in your gateway logs — not in a separate AI observability tool.
Same RBAC. If your junior admin cannot delete a collection, neither should an AI agent running under a shared admin key. Role-based access applies to every caller, including automated ones.
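The four controls above can be sketched as a single gateway-side check applied identically to every caller. The names here (`Caller`, `Gateway`, `check_request`, the role strings) are illustrative assumptions, not a real gateway API; the point is that identity, rate limiting, RBAC, and the audit record all happen in one place, for AI agents and REST clients alike.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Caller:
    client_id: str       # same identity model as any other consumer
    roles: set           # scoped roles, not god-mode access
    rate_limit: int      # allowed requests per 60-second window

@dataclass
class Gateway:
    audit_log: list = field(default_factory=list)
    _windows: dict = field(default_factory=dict)

    def check_request(self, caller: Caller, method: str,
                      endpoint: str, required_role: str) -> bool:
        now = time.time()
        # Same rate limits: drop timestamps outside the sliding window.
        window = [t for t in self._windows.get(caller.client_id, [])
                  if now - t < 60]
        # Same RBAC: the caller's roles decide, not what kind of client it is.
        allowed = len(window) < caller.rate_limit and required_role in caller.roles
        if allowed:
            window.append(now)
        self._windows[caller.client_id] = window
        # Same audit trail: who called what, when, and whether it was allowed.
        self.audit_log.append({
            "client_id": caller.client_id,
            "method": method,
            "endpoint": endpoint,
            "allowed": allowed,
            "ts": now,
        })
        return allowed
```

With this shape, an agent holding only a reader role is denied a DELETE exactly the way a junior admin would be, and the denial lands in the same log your SIEM already reads.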
The two failure modes to avoid
Failure mode 1: separate AI gateway. Teams stand up a second gateway specifically for AI traffic — different credentials, different logs, different policies. Now your SOC is watching two streams. Compliance teams cannot produce a unified audit narrative. Policies drift. This is the "parallel stack" problem in a new form.
Failure mode 2: no gateway at all. AI tools call upstream services directly, bypassing the gateway entirely. There are no rate limits, no logs, no enforcement. You only discover this during an incident or an audit.
Both failures share a root cause: the assumption that AI traffic is somehow different from API traffic. It is not. It is HTTP calls with credentials. Treat it that way.
What governed AI agent access looks like
A well-governed AI integration routes every agent call through the same gateway as your other API consumers:
- The agent authenticates with a client ID and scoped profile — not a root key
- Every request goes through the same rate limiting and quota enforcement
- Logs land in the same structured log pipeline your SIEM already reads
- A read-only audit role lets compliance teams review agent activity without touching production config
- Credential rotation happens through the same lifecycle process as every other credential
The agent is just another API consumer. It gets the appropriate access level. Nothing more.
The governance question to ask now
Before your next AI integration goes to production, ask: if this agent were a third-party contractor, would you give it unrestricted access with a non-rotating key and zero audit trail?
If the answer is no, the fix is routing it through the same front door everything else uses.
Zerq routes AI agent traffic through the same gateway, access control, and audit pipeline as your REST consumers. See how AI agent access works in Zerq or request a demo to walk through your specific agent and API setup.