
38% of Organizations Learn About Their API Breaches From Outsiders

More than a third of enterprises only discover API breaches through external notification. The detection gap is not a monitoring tool problem. It is a gateway architecture problem — one that starts with what your request logs actually contain.

  • api-security
  • observability
  • governance
  • compliance
  • audit
Zerq team

Here is the statistic that should stop enterprise security conversations cold: according to the 2025 State of API Security report from Traceable, 38% of organizations discovered their API breach only after external notification — a partner, a researcher, a regulator, or the attacker themselves making demands. Internal systems detected nothing until someone else told them.

That number does not mean those organizations had no security tooling. Most of them had firewalls, WAFs, API gateways, and SIEM infrastructure. It means their gateway architecture was not instrumented to produce the signals that would have triggered internal detection.

The same report found that only 21% of organizations report a strong ability to detect API attacks, and only 13% can prevent more than 50% of API attacks. These are not outliers. They describe the baseline capability of enterprise API security programs today.

What the current data says about the underlying exposure

The 2026 API ThreatStats report from Wallarm documents the scale of the problem: APIs accounted for 11,053 of 67,058 published security bulletins in 2025 — 17% of all reported vulnerabilities — and 43% of newly added CISA Known Exploited Vulnerabilities were API-related. A February 2026 BusinessWire release summarizing the research was headlined: "New Research Reveals APIs are the Single Most Exploited Attack Surface."

The 42Crunch State of API Security 2026 report sharpens the picture further: 97% of API vulnerabilities can be exploited with a single request, and nearly 99% are remotely accessible. There is no multi-stage attack chain to detect. There is one call, it succeeds, and unless your logging captured it with the right context, it is invisible in your records.

IBM's 2025 Cost of a Data Breach report puts the average breach cost at $4.44 million globally — $10.22 million for US-based organizations — with API downtime costing enterprises approximately $300,000 per hour in lost revenue. Those costs scale directly with detection latency. The earlier the detection, the shorter the dwell time, the narrower the blast radius.

The problem is not that organizations are not looking. It is that most gateway configurations were never designed to produce detection signals — only access signals.

The three gaps that make external notification the norm

Gap 1: Request logs are too thin to detect anything.

Standard API gateway logs record endpoint, HTTP method, status code, and timestamp. That combination tells you that a request happened. It does not tell you whether the request was unusual compared to this client's historical behavior, whether the response payload contained data it normally would not, or whether the sequence of calls in the last ten minutes matches a known enumeration pattern.

A basic credential stuffing campaign against a REST API looks like a stream of 401s. An IDOR enumeration attack looks like a stream of 200s. Both are invisible in a log that captures only method, path, and status code. The data that would distinguish an attack from legitimate traffic — client identity, request parameters, call sequence correlation, response body characteristics — is never written.
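The difference becomes concrete in a few lines. The sketch below assumes enriched records carrying client identity and request path (the field names and client names are illustrative, not any specific gateway's schema) and flags the IDOR enumeration pattern: one client walking many distinct resource IDs. A status-only log has no way to express this rule, because the fields it needs were never written.

```python
from collections import defaultdict

# Hypothetical enriched log records -- field names and clients are
# illustrative, not a specific gateway's schema.
records = [
    {"client": "partner-a", "path": "/orders/1001", "status": 200},
    {"client": "partner-a", "path": "/orders/1002", "status": 200},
    {"client": "partner-a", "path": "/orders/1003", "status": 200},
    {"client": "partner-a", "path": "/orders/1004", "status": 200},
    {"client": "partner-b", "path": "/orders/2207", "status": 200},
]

def flag_enumeration(records, threshold=3):
    """Flag clients that touched many distinct resource IDs in one
    collection -- the IDOR pattern a status-only log cannot show."""
    seen = defaultdict(set)
    for r in records:
        collection, _, resource = r["path"].rpartition("/")
        seen[(r["client"], collection)].add(resource)
    return [key for key, ids in seen.items() if len(ids) > threshold]

print(flag_enumeration(records))  # [('partner-a', '/orders')]
```

Every request here returned a 200; only the combination of client identity and parameter history separates the enumeration from normal traffic.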

Gap 2: No per-client identity means no anomaly baseline.

Anomaly detection requires a model of normal. Normal traffic volume, normal call sequences, normal parameter distributions — all per client, not per endpoint. Without per-client identity at the gateway, all traffic routes through shared service accounts or undifferentiated credential pools, and the baseline model averages behavior across all callers.

That means a partner whose traffic doubles because they started a new integration looks identical to an attacker who compromised that partner's API key and began enumerating records. The signal is the same. Only the identity-resolved baseline separates them.

Most gateway deployments that struggle with detection have this problem: the gateway routes traffic and enforces rate limits, but it does not produce a distinct audit record for each named client. You have one traffic stream, not the per-client view that detection requires.
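A minimal sketch of why the pooled view fails, using made-up client names and volumes: the same request stream is counted once under a shared service account and once under named clients. An outlier rule has nothing to compare against in the pooled view, while the per-client view surfaces the compromised credential immediately.

```python
import statistics
from collections import Counter

# Illustrative request stream. Under a shared service account the
# gateway sees one caller; with named clients each row is attributable.
requests = (
    [{"shared": "svc-pool", "client": "partner-a"}] * 50   # normal partner volume
  + [{"shared": "svc-pool", "client": "partner-b"}] * 55   # normal partner volume
  + [{"shared": "svc-pool", "client": "partner-c"}] * 400  # compromised key enumerating
)

pooled = Counter(r["shared"] for r in requests)
per_client = Counter(r["client"] for r in requests)

def outliers(counts, factor=3):
    """Callers whose volume exceeds factor x the median across callers."""
    med = statistics.median(counts.values())
    return [c for c, n in counts.items() if n > factor * med]

print(outliers(pooled))      # [] -- one pooled identity, nothing to compare
print(outliers(per_client))  # ['partner-c']
```

The detection logic is trivial; what makes it possible is that each request was attributed to a named client when it was logged, not reconstructed afterward.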

Gap 3: Logs are written in a format SIEM rules cannot act on directly.

Even when gateway logs contain useful data, format matters. Unstructured text logs require custom parsing before an alert rule can be written against them. If your security team has to write a parser before they can write a detection rule, detection is delayed by weeks at minimum — and often never happens at all for the long tail of API endpoints that no one has prioritized.

Structured logging in a consistent JSON schema, with defined field names for client identity, latency, endpoint, and response status, lets you ship alert rules from day one. The absence of structured output is not a technical limitation. It is a design choice that downstream security tooling pays for indefinitely.
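To make the contrast concrete: with a structured log line (the field names below are illustrative of the kind of schema a SIEM can target, not a particular product's format), a detection rule is a one-line predicate over named fields, with no parser stage between the log and the alert.

```python
import json

# A hypothetical structured gateway log line. Against an unstructured
# text line, this same check would first need a custom regex parser.
line = ('{"ts": "2026-02-10T14:03:22Z", "client_id": "partner-a", '
        '"method": "GET", "path": "/v1/accounts/42", "status": 401, '
        '"latency_ms": 18}')

record = json.loads(line)

def auth_failure(rec):
    """A SIEM-style rule written directly against defined field names."""
    return rec["status"] == 401

print(auth_failure(record))       # True
print(record["client_id"])        # partner-a -- attributable immediately
```

The rule ships on day one because the schema is a contract; the same predicate works for every endpoint that logs through the gateway.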

What external notification actually costs

The financial case for closing the detection gap is straightforward, but the operational case matters more.

When an organization learns about a breach from an external source, reconstruction begins from behind. The question every regulated industry requires an answer to — what exactly did this identity access, in what order, at which endpoints, with what parameters, over what time window — depends entirely on whether the gateway produced a record that answers it. Retrospective detail cannot be added to logs that were never written.

IBM's research consistently shows that organizations with faster detection and containment cycles pay substantially less per breach. The dwell time — the gap between initial compromise and containment — is the primary cost multiplier. Every day the attacker is inside the system while internal monitoring sees nothing is another day of potential data access, lateral movement, and expanding blast radius.

CybelAngel's 2026 API Security Risks analysis identifies insufficient logging as the second-most-common driver of undiscovered breaches, after missing authentication. Not sophisticated evasion. Not zero-days. Logs that simply did not contain enough information to reconstruct what happened. After the incident, the records that would have enabled detection either do not exist or contain too little context to be useful.

For the 57% of organizations that Traceable's report found had experienced API breaches in the past two years, the detection architecture question is past tense: the audit record either existed or it did not. For the organizations that have not yet had an incident, it is a present-tense architectural decision.

What detection actually requires at the gateway layer

Closing the detection gap is less complex than most teams expect. It does not require a separate detection platform or a new vendor layer. It requires three architectural decisions about what the gateway produces:

Complete structured request logs. Every call, with client identity, request parameters, response status, latency, and a correlation timestamp — not a sample, not a high-watermark summary. In a format your SIEM can ingest without a custom parser. Zerq writes structured JSON logs for every request through the gateway, filterable by partner, product, and time range, retained in your own MongoDB instance — not routed to a vendor cloud. The records exist for compliance reconstruction queries in under a minute against indexed data.
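The reconstruction query this enables has a simple shape. The sketch below uses an in-memory list as a stand-in for an indexed log store (in practice this would be a query against the log collection; field names and values are illustrative, not Zerq's actual schema): everything one identity touched in a time window, in call order.

```python
from datetime import datetime, timezone

# In-memory stand-in for an indexed log store. Field names and
# records are illustrative only.
logs = [
    {"ts": datetime(2026, 2, 10, 14, 0, tzinfo=timezone.utc),
     "client_id": "partner-a", "path": "/v1/accounts/17", "status": 200},
    {"ts": datetime(2026, 2, 10, 14, 5, tzinfo=timezone.utc),
     "client_id": "partner-a", "path": "/v1/accounts/18", "status": 200},
    {"ts": datetime(2026, 2, 10, 15, 0, tzinfo=timezone.utc),
     "client_id": "partner-b", "path": "/v1/orders/3", "status": 200},
]

def reconstruct(logs, client_id, start, end):
    """Everything one identity accessed in a window, in call order --
    the question incident response has to answer first."""
    return sorted(
        (r for r in logs
         if r["client_id"] == client_id and start <= r["ts"] < end),
        key=lambda r: r["ts"],
    )

window = reconstruct(
    logs, "partner-a",
    datetime(2026, 2, 10, 13, 0, tzinfo=timezone.utc),
    datetime(2026, 2, 10, 16, 0, tzinfo=timezone.utc),
)
print([r["path"] for r in window])  # ['/v1/accounts/17', '/v1/accounts/18']
```

The query is only answerable because every record carries identity and a timestamp; sampled or status-only logs cannot produce this view no matter how the query is written.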

Per-client traffic identity. Each caller — human application, service account, AI agent — provisioned as a named client with distinct credentials and its own traffic record. When something anomalous happens, the first question — which specific client did this, in this session, at these endpoints — is answerable immediately. Zerq's client and credential model ties every request to a specific provisioned identity. The gateway enforces the same per-client model for REST applications and AI agents connecting via the Gateway MCP, which matters as non-human traffic becomes the dominant API call volume.

Anomaly detection against per-client baselines. Volume deviations, unusual endpoint access patterns, latency spikes — flagged against individual client history, not global thresholds that average across all traffic. Zerq's observability layer includes AI-assisted anomaly detection that alerts when a client's traffic deviates from its established baseline. The distinction between a partner's legitimate traffic increase and an attacker exploiting a compromised credential requires a per-client view of what normal looks like.
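A minimal sketch of the per-client baseline idea, with invented hourly counts: each client's current volume is scored against its own history rather than a global threshold, so a spike that is anomalous for one client can be routine for another.

```python
import statistics

# Hypothetical hourly request counts per client -- the individual
# history a per-client baseline is built from.
history = {
    "partner-a": [48, 52, 50, 49, 51, 47, 50],
    "partner-b": [200, 210, 195, 205, 198, 202, 201],
}

def deviates(client, current, history, z_threshold=3.0):
    """Flag a count far outside this client's own baseline, not a
    global threshold averaged across all callers."""
    mean = statistics.mean(history[client])
    stdev = statistics.stdev(history[client]) or 1.0  # guard flat history
    return abs(current - mean) / stdev > z_threshold

print(deviates("partner-a", 400, history))  # True -- far outside its baseline
print(deviates("partner-b", 215, history))  # False -- routine for this client
```

A global threshold tuned to partner-b's volume would never fire on partner-a's spike; the per-client baseline is what makes the smaller client's anomaly visible.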

None of these require a separate product. They require a gateway instrumented from the start to produce identity-resolved, structured, complete audit records — rather than access-oriented status logs that tell you requests happened but not who made them or what they retrieved.

The external notification problem is a governance symptom

Organizations that find out about API breaches from outsiders are not uniquely negligent. Most have functional security programs. The detection gap comes from a specific architectural pattern: gateways deployed to enforce access controls, but configured to produce the minimum log output required for debugging rather than the complete record required for detection.

Access control and detection are different design goals. A gateway that authenticates requests, enforces rate limits, and routes traffic correctly can do both — but only if detection is an explicit requirement from the start, not an afterthought added after the first incident makes the gap unmissable.

The record is either in your logs today, or it is not. The 38% found that out the hard way.


Zerq writes complete, structured audit records for every request — human, service account, or AI agent — retained in your own MongoDB instance without routing data to a vendor cloud. See how the observability layer works, explore the logging documentation to understand what a complete gateway audit record contains, or request a demo to review what your current logging architecture would produce in an incident reconstruction scenario.