
28 million secrets leaked on GitHub in 2025. Yours might be next — here's the gateway fix.

Hardcoded API credentials, upstream keys in config files, and shared tokens with no rotation cycle are among the leading causes of API breaches. Here's how a gateway-first approach eliminates the risk.

  • security
  • api-management
  • secrets-management
  • credentials
Zerq team

GitGuardian's 2026 State of Secrets Sprawl report found 28.65 million new hardcoded secrets added to public GitHub repositories in 2025 — a 34% increase over the prior year. API keys, database credentials, webhook tokens, upstream service passwords.

The cause is not careless developers. It is a structural problem: secrets have to live somewhere, and without a clear system for where that is, they end up where it is most convenient — config files, .env files, container images, CI/CD pipelines, and eventually, git history.

AI coding tools made it worse. Commits generated with AI assistance had a 3.2% secret-leak rate — more than double the 1.5% baseline across all public commits.

Where gateway secrets actually live (and where they should not)

Most API gateways need credentials to function. Upstream service keys. OAuth client secrets. Database passwords for audit log persistence. mTLS certificates for high-assurance partners. Partner API tokens.

In a typical deployment, these end up in one of three places:

Config files checked into version control. The most dangerous pattern. Even in private repos, a mis-scoped access token or an accidental fork can expose everything. Git history means deletion is not enough.

Environment variables set at deploy time. Better than config files, but only if the values are actually injected from a secret store — not copy-pasted into a Dockerfile or a CI/CD pipeline YAML that gets committed.

Shared spreadsheets or wikis. More common than anyone admits. A partner sends credentials over email. Someone puts them in Confluence. Three engineers have them. Nobody knows which systems actually use them.

The three properties a credential system needs

1. Credentials are never in configuration. The gateway should reference credentials by name — upstream-payment-service-key — and resolve the value at runtime from a secure store. If an attacker gets your gateway config, they get a reference, not a secret.
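A minimal sketch of the reference-not-value pattern, using a hypothetical in-memory `CredentialStore` as a stand-in for a real backend such as Vault or a managed secret store (the names `credential_ref` and `upstream-payment-service-key` are illustrative, not a real API):

```python
class CredentialStore:
    """Minimal in-memory stand-in for a real secret backend
    (Vault, AWS Secrets Manager, and the like)."""

    def __init__(self, secrets):
        self._secrets = dict(secrets)

    def resolve(self, name):
        try:
            return self._secrets[name]
        except KeyError:
            raise KeyError(f"no credential registered under {name!r}") from None


# The gateway config carries only a *reference*; the secret value
# never appears in anything that could be committed.
config = {
    "upstream": "https://payments.internal",           # hypothetical upstream
    "credential_ref": "upstream-payment-service-key",  # a name, not a value
}

store = CredentialStore({"upstream-payment-service-key": "s3cr3t-value"})

# Resolved at request time, from the store, by name:
api_key = store.resolve(config["credential_ref"])
```

An attacker who exfiltrates `config` learns only the reference string; the value exists solely inside the store at runtime.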

2. Every credential has a rotation lifecycle. A credential that never rotates is a breach waiting to happen. The system should support issuing a new value, updating it in the store, verifying it works, and retiring the old one — without downtime and without a manual process that gets skipped under pressure.
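The issue → verify → swap → retire sequence can be sketched as a single function. Everything here is hypothetical scaffolding (`issue_new` and `verify` would be callbacks supplied by the integration for that upstream), but the ordering is the point: the new value must be verified before cutover, and the old value is retired only after a grace period.

```python
import secrets


def rotate(store, name, issue_new, verify):
    """Zero-downtime rotation sketch: issue a new value, verify it
    works against the upstream, then swap it into the store so that
    consumers resolving by name pick it up on the next request."""
    old = store[name]
    new = issue_new()
    if not verify(new):  # new value must work *before* cutover
        raise RuntimeError(f"verification failed for {name}; keeping old value")
    store[name] = new
    return old  # caller revokes this upstream after a grace period


store = {"upstream-payment-service-key": "old-value"}
retired = rotate(
    store,
    "upstream-payment-service-key",
    issue_new=lambda: secrets.token_hex(16),  # stand-in issuer
    verify=lambda v: len(v) == 32,            # stand-in health check
)
```

Because consumers resolve by name (property 1), the swap itself requires no config change and no restart.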

3. Access is per-credential, per-consumer. A partner credential should be scoped to that partner. An upstream service key should be used only by the workflows that need it. Broad, shared credentials — one key for all upstream calls — mean a single exposure compromises everything.
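Per-consumer scoping reduces to an explicit grant table checked at resolution time. A sketch, with hypothetical consumer and credential names:

```python
# Hypothetical grant table: each consumer may resolve only the
# credentials explicitly granted to it.
grants = {
    "partner-acme": {"partner-acme-token"},
    "billing-workflow": {"upstream-payment-service-key"},
}


def resolve_scoped(store, consumer, name):
    """Deny by default: no grant, no credential."""
    if name not in grants.get(consumer, set()):
        raise PermissionError(f"{consumer} is not granted {name}")
    return store[name]


store = {"partner-acme-token": "t-1", "upstream-payment-service-key": "k-1"}
```

With this shape, revoking one partner's token touches exactly one grant and one stored value; nothing else shares it.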

Secrets sprawl is an architecture problem

The solution is not a reminder in your onboarding docs that says "don't hardcode credentials." The solution is an architecture where there is no place to put a hardcoded credential — because the system is designed to never need one.

That means:

  • Gateway credentials live in a credential store, not in config files or environment variables that land in git
  • Upstream service keys are referenced by name in workflow and proxy configuration, resolved at request time
  • Partner credentials are per-partner, scoped to their access level, and rotated through the same lifecycle as every other credential
  • CI/CD pipelines inject secrets from Vault or a managed secret store — not from variables defined in the pipeline YAML
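The last bullet has a simple enforcement counterpart in deploy scripts: treat a missing injected secret as a hard failure rather than falling back to anything committed. A sketch (the variable name `DEPLOY_TOKEN` and the assignment simulating the runner's injection are illustrative):

```python
import os


def require_secret(env_name):
    """Fail fast if the pipeline runner did not inject the secret.
    The value must come from the runner's secret-store binding,
    never from a variable defined in the committed pipeline YAML."""
    value = os.environ.get(env_name)
    if value is None:
        raise RuntimeError(
            f"{env_name} not injected; check the runner's secret binding"
        )
    return value


# Simulated here; in CI the runner sets this from the secret store.
os.environ["DEPLOY_TOKEN"] = "injected-by-runner"
token = require_secret("DEPLOY_TOKEN")
```

Failing loudly at startup is what turns "don't hardcode it" from a policy into a structural property: a pipeline with no injected secret simply does not run.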

When this is the default, hardcoded secrets stop being a developer discipline problem and become an architecture impossibility.

The specific risk of AI-generated configuration

The 2× leak rate in AI-assisted commits is worth understanding. AI coding tools suggest code that works — but they complete patterns based on what they have seen. If they have seen .env files with API keys, they suggest .env files with API keys. If they have seen secrets in Kubernetes manifests, they suggest secrets in Kubernetes manifests.

This is not a failure of the AI tool. It is a failure of the surrounding architecture. If the correct way to reference an upstream credential is ${UPSTREAM_KEY} or a named reference to a credential store, AI tools will learn to suggest that — but only if that is the pattern in your codebase.
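The `${UPSTREAM_KEY}` pattern above is cheap to support: expand named references when config is loaded, so the committed file only ever contains the placeholder. A sketch, resolving from the environment by default (any secret backend could stand in for `resolver`):

```python
import os
import re

REF = re.compile(r"\$\{([A-Z0-9_]+)\}")


def expand_refs(config_value, resolver=None):
    """Replace ${NAME} placeholders with values resolved at load
    time. The committed config only ever contains the reference."""
    if resolver is None:
        resolver = os.environ.__getitem__
    return REF.sub(lambda m: resolver(m.group(1)), config_value)


# Simulated injection; in production this comes from the secret store.
os.environ["UPSTREAM_KEY"] = "resolved-at-runtime"

line = "authorization: Bearer ${UPSTREAM_KEY}"
expanded = expand_refs(line)
```

Once this is the only pattern in the repository, it is also the pattern AI tools will see and complete.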

The teams that are most resilient to the AI-assisted credential leak problem are those with a clear, enforced architecture where secrets simply cannot appear in configuration — not as a policy, but as a structural property.

What to audit right now

Before your next deployment, check:

  • Are any upstream service credentials written directly in gateway configuration files?
  • Are environment variables that contain secrets defined in files that get committed?
  • Does every partner credential have a defined rotation schedule — and was the last rotation actually performed?
  • Can you list every system that uses each credential, and revoke one without breaking everything else?
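The first two questions can be partially answered mechanically. A rough sketch of a repo scan for likely hardcoded secrets; the patterns here are deliberately crude (dedicated scanners like gitleaks or GitGuardian use far richer rule sets), but they catch the obvious offenders:

```python
import re
from pathlib import Path

# Crude heuristics: assignment of a long quoted value to a
# secret-ish key, plus the AWS access-key-id shape.
PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|secret|password|token)\s*[:=]\s*['\"][^'\"]{8,}['\"]"),
    re.compile(r"AKIA[0-9A-Z]{16}"),
]


def scan(path):
    """Yield (file, line_no, line) for lines that look like hardcoded secrets."""
    for file in Path(path).rglob("*"):
        if not file.is_file():
            continue
        try:
            text = file.read_text(errors="ignore")
        except OSError:
            continue
        for i, line in enumerate(text.splitlines(), 1):
            if any(p.search(line) for p in PATTERNS):
                yield (str(file), i, line.strip())
```

A clean scan does not prove the architecture is right, but a non-empty one proves it is wrong.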

If any of those answers are unclear, the credential architecture needs attention before the next incident makes it urgent.


Zerq stores credentials in an encrypted credential store, references them by name in proxy and workflow configuration, and supports rotation without downtime. See how credential management works or request a demo to review your current credential architecture.