Your API Gateway Is Your AI Gateway: Connect Claude, Cursor, and ChatGPT to Your APIs with MCP
Stop building a second AI gateway. Claude, Cursor, ChatGPT, and any MCP-compatible client can call your existing enterprise APIs through the same gateway, credentials, and audit trail you already have — via MCP.
- mcp
- ai-agents
- api-gateway
- ai-gateway
- claude
- cursor
- chatgpt
- api-security
- developer-experience
AI tools — Claude in your IDE, Cursor suggesting code, ChatGPT embedded in internal tooling, autonomous agents running production workflows — all need to call your APIs. Not your public docs. Your live APIs. The same ones your mobile apps call, your partner integrations depend on, and your compliance team audits.
The question is not whether AI clients will call your APIs. They already are. The question is whether that access goes through your governance layer — or around it.
The problem: AI clients have no first-class seat in your gateway
Every enterprise API gateway today was designed for one type of client: a human-built application. An app authenticates with a key or token, calls an endpoint, gets a response. Simple.
AI clients break that model in three specific ways.
They can't read a README. A human developer integrating with your Payments API reads the documentation, understands the schema, handles errors. Claude, Cursor, or a ChatGPT-connected agent needs a machine-readable way to discover what APIs exist and how to call them — at runtime, not at build time.
They can't be given a single long-lived key. When you hand an AI agent a key with broad access, you have no control over what it calls, no way to scope it to specific operations, and no practical rotation path. The blast radius of a compromised credential covers everything that key can reach.
Standard audit logs don't capture them. When an AI agent calls GET /orders/45821, your current audit log shows a key ID. It doesn't show that the caller was Claude, which conversation triggered the call, or what the agent was trying to accomplish. For a compliance team trying to produce evidence of what AI clients accessed in a 90-day window, that log is useless.
The typical response is one of three workarounds:
- Block AI tools entirely. Sounds safe. In practice, engineers find another path: direct calls to upstream services, shared credentials over Slack, scripts that bypass the gateway completely. Blocking creates invisible access, which is worse than governed access.
- Issue a long-lived API key for the AI tool. Fast to ship, impossible to maintain. No scoping, no rotation, no meaningful audit trail. The key persists indefinitely and accumulates risk.
- Stand up a separate "AI endpoint." Two gateways. Two access control models. Two audit logs. Two things that can be misconfigured. The AI endpoint typically has lighter security because it's "just internal." Auditors don't see one clean picture; they see gaps.
None of these is a governance model. They're workarounds.
The right architecture: your API gateway is your AI gateway
AI tools are API clients. That's the frame that makes the solution obvious.
A support chatbot calling GET /orders/{id} is doing exactly what your mobile app does. The technical operation is identical. The governance requirements are identical: authentication, authorization, rate limiting, and a structured record of what happened.
The correct architecture doesn't create a special path for AI clients. It extends your existing governance layer to cover them — with the same controls, the same credentials model, and the same audit trail.
What AI clients need that traditional clients don't is a discovery protocol: a machine-readable way to find out what APIs are available, what they expect, and how to call them correctly. That protocol is MCP — the Model Context Protocol.
MCP is a standard that lets AI tools discover and call APIs. It's supported natively by Claude Desktop, Cursor, and ChatGPT (via custom integrations and plugins), along with dozens of other AI clients and agent frameworks. When your gateway exposes an MCP endpoint, any MCP-compatible client can connect to it — and call your APIs through your existing governance layer.
That's what Zerq Gateway MCP does.
How it works: REST APIs become MCP tools
Zerq Gateway MCP is an MCP-compatible endpoint built into the Zerq API gateway. It exposes your published API catalog as discoverable, callable tools — without any changes to your upstream services.
AI clients connect with the same credentials as your existing API clients: a client ID and an access profile. No new credential type. No separate registration. The self-service flow partners use to get API access works unchanged for AI clients.
The four MCP operations
Every AI client that connects goes through four operations:
list_collections() — returns the API collections visible to the connected access profile. If the profile covers Orders and Payments, those are returned. Nothing else. The AI client has no visibility into APIs outside its scope.
list_endpoints(collection) — returns the endpoints within a collection with descriptions. Claude or Cursor can understand what operations exist and what they do before executing anything. This is how an AI assistant knows to call GET /orders/{id} instead of guessing.
inspect_contract(endpoint) — returns the full schema for an endpoint: parameters, request body shape, response shape, any constraints. This is what Claude uses to construct a valid API call. No hallucinated parameters, no format mismatches.
execute_call(endpoint, params) — makes the actual request through the gateway to your upstream service. Real data, real response, fully governed.
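Stitched together, the four operations form a simple discover-then-execute loop. The sketch below models that loop in Python against an in-memory stand-in for the gateway: the four method names mirror the operations above, but `GatewayMCPStub`, its data layout, and the contract shape are invented for illustration (the real transport is HTTP against the MCP endpoint).

```python
# In-memory stand-in for the gateway's MCP endpoint. The four method
# names mirror the MCP operations; everything else is illustrative.
class GatewayMCPStub:
    def __init__(self, profile_scope):
        # profile_scope: collection name -> list of endpoint descriptors,
        # representing what the connected access profile may see
        self.profile_scope = profile_scope

    def list_collections(self):
        # Only collections inside the profile are ever returned
        return sorted(self.profile_scope)

    def list_endpoints(self, collection):
        return [e["name"] for e in self.profile_scope[collection]]

    def inspect_contract(self, collection, endpoint):
        for e in self.profile_scope[collection]:
            if e["name"] == endpoint:
                return e["contract"]
        raise KeyError(f"{endpoint} not in profile scope")

    def execute_call(self, collection, endpoint, params):
        self.inspect_contract(collection, endpoint)  # out of scope -> error
        # A real gateway would authenticate, rate limit, log, then proxy
        return {"endpoint": endpoint, "params": params, "status": "ok"}


mcp = GatewayMCPStub({
    "Orders": [{
        "name": "GET /orders/{id}",
        "contract": {"params": {"id": "integer"},
                     "response": {"status": "string"}},
    }]
})

collections = mcp.list_collections()      # only what the profile allows
endpoints = mcp.list_endpoints("Orders")
contract = mcp.inspect_contract("Orders", "GET /orders/{id}")
result = mcp.execute_call("Orders", "GET /orders/{id}", {"id": 45821})
```

Note that an out-of-scope call fails at the discovery layer, before anything could reach an upstream: a collection outside the profile simply is not in `profile_scope`.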
Scope is the access profile
The most important design property: the access profile is the boundary.
An AI client connected with a profile scoped to read operations on the Orders collection can discover Order endpoints, inspect their schemas, and call GET /orders/{id}. It cannot discover your internal admin APIs. It cannot call endpoints outside its profile. It cannot escalate its own permissions.
The model doesn't decide what it can reach. The gateway enforces this at the same layer it enforces everything else — before the request reaches your upstream.
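A minimal sketch of that enforcement, with profile contents and names invented for illustration: the gateway consults the access profile on every call, and anything not explicitly granted is denied.

```python
# Hypothetical profile grants: profile ID -> set of (method, route) pairs.
PROFILE_GRANTS = {
    "support-bot": {("GET", "/orders"), ("GET", "/orders/{id}")},
}

def authorize(profile_id, method, route):
    # Default-deny: an operation is callable only if the profile grants it.
    return (method, route) in PROFILE_GRANTS.get(profile_id, set())

assert authorize("support-bot", "GET", "/orders/{id}")         # in scope
assert not authorize("support-bot", "DELETE", "/orders/{id}")  # not granted
assert not authorize("support-bot", "GET", "/admin/users")     # invisible
```

The check runs in the gateway, not in the model, so no amount of prompting can widen the scope.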
Everything else is the same gateway
Because Gateway MCP routes through the same gateway as all your other clients, every other control applies automatically:
- Rate limits — the same per-profile or per-product limits apply. A runaway agent generating thousands of requests hits the same wall as any other client.
- Authentication — every call is authenticated against the access profile credentials. No anonymous calls, no implicit trust.
- Audit trail — every call is logged to your audit collection in the same format as every other request. AI client calls are in the same place as app calls and partner calls. One query covers everything.
- Credential rotation and revocation — credentials follow your normal lifecycle. Rotate them, the AI client picks up the new credentials. Revoke the profile, access is cut off instantly.
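One way to picture the single audit trail: AI-originated calls land as ordinary records next to app and partner calls, so one query answers "what did this profile touch." The record shape below is a hypothetical simplification, not Zerq's actual log format.

```python
# Hypothetical unified audit records: the same shape for every client type.
audit_log = [
    {"client": "mobile-app",     "profile": "orders-read",
     "call": "GET /orders/45821", "ts": "2025-06-01T09:14:02Z"},
    {"client": "claude-desktop", "profile": "orders-read",
     "call": "GET /orders/45821", "ts": "2025-06-01T09:20:41Z"},
]

def calls_for_profile(log, profile):
    # One query covers apps, partners, and AI clients alike.
    return [r for r in log if r["profile"] == profile]

matches = calls_for_profile(audit_log, "orders-read")  # both records
```

Producing 90-day evidence of AI access becomes a filter on one log, not a join across two systems.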
Connecting AI clients: practical setup
Claude Desktop
Claude Desktop supports MCP natively. Add Zerq as an MCP server in Claude Desktop settings:
```json
{
  "mcpServers": {
    "zerq": {
      "type": "http",
      "url": "https://your-gateway-host/mcp",
      "headers": {
        "X-Client-ID": "your-client-id",
        "X-Profile-ID": "your-profile-id"
      }
    }
  }
}
```
Restart Claude Desktop. Open a conversation. Claude will call list_collections() to discover what APIs are in scope. From there it can explore endpoints, inspect schemas, and execute real API calls — entirely within the profile boundary.
What Claude can now do: answer questions about live data, run lookups on behalf of users, draft API integrations using accurate live schemas, and execute workflows that span multiple API calls — all governed, all logged.
Cursor
Cursor supports MCP through its agent and tool framework. Add the MCP server configuration in Cursor settings under the MCP or tools section:
```json
{
  "mcp": {
    "servers": {
      "zerq": {
        "type": "http",
        "url": "https://your-gateway-host/mcp",
        "headers": {
          "X-Client-ID": "your-client-id",
          "X-Profile-ID": "your-profile-id"
        }
      }
    }
  }
}
```
Once connected, Cursor can use your API catalog to suggest accurate implementations, inspect live endpoint schemas while you write integration code, and run test calls from within the IDE without switching tools.
What Cursor can now do: propose integration code against your real API schemas, catch parameter mismatches before they hit production, and let developers explore your catalog in natural language without leaving their editor.
ChatGPT and other MCP clients
ChatGPT supports external tool connections through custom integrations that can be configured to call an MCP-compatible endpoint. Any MCP client — IDE plugins, autonomous agent frameworks, internal chatbots built on LLM APIs — connects with the same configuration pattern: the gateway URL and the access profile credentials.
The gateway doesn't know or care what MCP client is making the request. It applies the same rules to all of them.
Real use cases
Operations team: live order data in Claude
An operations team member asks Claude Desktop: "What is the current status of order #45821, and has anything changed in the last hour?"
Claude calls list_collections(), finds Orders in scope, calls list_endpoints("Orders"), identifies the right endpoint, inspects the contract, then executes GET /orders/45821. The response comes back with current status, timestamp, and state history. Claude summarizes it conversationally.
That call went through the gateway. It was rate-limited, authenticated, and logged. The operations team got real data without opening a dashboard. The audit trail shows exactly what was called and when.
Developer in Cursor: schema-accurate integration
A developer building a new payments integration asks Cursor: "What does the POST /payments endpoint expect?"
Cursor calls inspect_contract("POST /payments") and returns the full schema — required fields, optional fields, validation rules, response shape. The developer gets the live contract from the actual gateway configuration. No stale documentation. No format guessing.
Cursor can then propose integration code that matches the live schema exactly — reducing the back-and-forth of building against docs that may not match production.
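As a concrete picture, an inspect_contract response for a payments endpoint might look like the structure below. Every field name and constraint here is invented for illustration; the real contract comes from your gateway configuration.

```python
# Hypothetical contract for POST /payments, as an MCP client might receive it.
payments_contract = {
    "method": "POST",
    "path": "/payments",
    "request_body": {
        "required": {
            "order_id": "string",
            "amount_minor": "integer (minor currency units)",
            "currency": "string (ISO 4217 code)",
        },
        "optional": {
            "idempotency_key": "string",
        },
    },
    "responses": {
        "201": {"payment_id": "string", "status": "string"},
        "422": {"error": "string", "field": "string"},
    },
}
```

With this in hand, an assistant can validate a draft request body against the live contract before calling execute_call, instead of discovering a missing field in production.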
Support chatbot: live customer queries
A support chatbot connected to Gateway MCP answers agent questions by calling your APIs directly. Instead of a knowledge base that goes stale, it queries live data.
"Has customer X's refund been processed?" → chatbot calls the relevant endpoint through Gateway MCP → returns the current state.
The chatbot operates on its own access profile, scoped to support-relevant APIs. If a conversation leads somewhere outside that scope, the gateway stops it. The boundary is enforced — not hoped for.
Autonomous agent: multi-step API workflow
An agent running a nightly reconciliation workflow calls several endpoints in sequence: retrieve unprocessed orders, check inventory levels, trigger fulfillment for eligible orders. Each call goes through the gateway. Each call is logged. If the agent exceeds rate limits, the gateway applies backpressure the same way it does for any other client.
The agent's access is scoped to the APIs it needs for reconciliation. It cannot discover or call anything outside that profile.
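The workflow reads naturally as a loop over execute_call. In the sketch below, execute_call is a local stub that fakes gateway responses so the flow is runnable; the endpoint names and response shapes are assumptions. The 429 branch marks where a real agent would absorb gateway backpressure.

```python
import time

def execute_call(endpoint, params=None):
    # Stub standing in for the gateway's MCP execute_call; a real call
    # would be authenticated, rate limited, and logged by the gateway.
    if endpoint == "GET /orders":
        return {"status": 200, "body": [{"id": 7, "eligible": True},
                                        {"id": 8, "eligible": False}]}
    if endpoint == "GET /inventory/{id}":
        return {"status": 200, "body": {"in_stock": True}}
    if endpoint == "POST /fulfillments":
        return {"status": 201, "body": {}}
    return {"status": 404, "body": {}}

def reconcile():
    resp = execute_call("GET /orders", {"state": "unprocessed"})
    while resp["status"] == 429:  # rate limited: back off like any client
        time.sleep(1)
        resp = execute_call("GET /orders", {"state": "unprocessed"})
    fulfilled = []
    for order in resp["body"]:
        stock = execute_call("GET /inventory/{id}", {"id": order["id"]})
        if order["eligible"] and stock["body"]["in_stock"]:
            execute_call("POST /fulfillments", {"order_id": order["id"]})
            fulfilled.append(order["id"])
    return fulfilled

fulfilled = reconcile()  # only the eligible, in-stock order is fulfilled
```

Every branch of the loop is an ordinary gateway call, so the agent inherits rate limits, authentication, and logging without any agent-specific plumbing.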
The "AI Gateway" question
Vendors are starting to position products as "AI gateways" — purpose-built infrastructure specifically for AI clients. The implicit pitch is that AI clients are different enough to need a separate governance layer.
They're not. And separating them creates the exact problem you're trying to solve.
When your AI clients call APIs through a dedicated AI gateway and your apps call APIs through your enterprise gateway, you have two audit trails, two access control models, two rate limiting configurations that don't coordinate, and two things that can be misconfigured independently. Compliance teams see a partial picture. Security teams have double the surface area.
The right architecture is one gateway that handles all client types — apps, partners, and AI — with the same controls and one audit trail. AI clients get discovery capabilities (via MCP) that traditional clients don't need. Everything else is the same.
Your API gateway already is your AI gateway. It just needs MCP.
What you do not need to build
Zerq Gateway MCP is part of Zerq. If you are running Zerq, the MCP endpoint is already available. You don't need to:
- Deploy a separate MCP server
- Build a separate authentication system for AI clients
- Maintain a second set of credentials for AI tools
- Create a separate logging pipeline for AI-originated calls
- Stand up additional infrastructure
The access profiles you use for your partner integrations work for AI clients. The audit trail you already have captures AI calls. The rate limits you have configured apply. There is no "AI layer" to build — only a client to connect.
The security model
The access profile is the boundary. AI clients — Claude, Cursor, ChatGPT, any agent — see only what their profile allows, call only what their profile permits, and every call is logged in the same place as every other request.
There is no special trust for AI clients. There is no weaker enforcement for AI clients. The gateway does not know or care that the client is an AI model — it applies the same rules it applies to everything else.
That is the correct architecture for AI API access in a regulated enterprise.