
Idempotent payment APIs at the gateway layer: duplicate request protection with Zerq workflows

Implement idempotent payment APIs at the gateway layer using Zerq's workflow builder — Redis-backed duplicate detection, no backend code changes needed.

  • fintech
  • payments
  • workflow-builder
  • redis
  • api-management
  • compliance
Zerq team

Payment systems fail in specific, expensive ways. A mobile client submits a payment request. The network times out before the response arrives. The client retries. The payment backend processes the same transfer twice. The customer is charged twice for the same transaction.

This scenario (the double charge, the duplicate order, the repeated settlement) is one of the most common escalations in fintech and B2B payments. The standard fix is to implement idempotency keys at the backend service: a unique key passed in the request header, stored in a database, and checked on every incoming request. When one team implements this consistently across one service, the pattern works. When six services across four teams implement it independently, the result is four different database schemas, three different TTL policies, two different error response shapes, and one recurring incident when a new engineer inherits code that does not match their mental model.
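The backend-side pattern described above can be sketched in a few lines. This is a minimal illustration only: an in-memory dict stands in for the database table, and `process_transfer` is a hypothetical payment processor, not part of any real service.

```python
class IdempotentBackend:
    """Minimal sketch of backend-side idempotency: check a store for the
    key before processing, replay the stored response on a repeat."""

    def __init__(self, process_fn):
        self._store = {}            # stands in for the idempotency database table
        self._process = process_fn

    def handle(self, idempotency_key, payload):
        if idempotency_key in self._store:
            return self._store[idempotency_key]   # replay: no reprocessing
        response = self._process(payload)         # real work happens exactly once
        self._store[idempotency_key] = response
        return response

# Usage: the second call with the same key never reaches the processor.
calls = []
def process_transfer(payload):                    # hypothetical processor
    calls.append(payload)
    return {"status": "ok", "txn_id": f"txn-{len(calls)}"}

backend = IdempotentBackend(process_transfer)
first = backend.handle("key-1", {"amount": 100})
retry = backend.handle("key-1", {"amount": 100})
assert first == retry and len(calls) == 1
```

Each team that writes this independently also chooses its own store, TTL, and error shape, which is exactly the divergence the gateway-level approach removes.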

An idempotent payment API gateway layer changes where the enforcement lives. The Zerq workflow builder lets you attach a processing pipeline to any API proxy: one that checks Redis for a prior response before the request reaches any backend service, returns the cached result if found, and proceeds as normal if not. One implementation. Consistent duplicate detection across every payment endpoint. Visible in the same audit trail as every other API event.

Why a separate middleware layer does not fix this

The two gateway approaches teams most often reach for address adjacent problems, not this one. Kong's request-termination plugin can block or modify requests based on headers, but it has no mechanism to capture the upstream response from a successful call and replay that exact response when the same key appears on a retry. AWS API Gateway's response caching operates on a time window per endpoint: all requests to a route within the cache TTL receive the same response, which is the inverse of idempotency. Each unique key needs its own isolated cached response. Apigee's caching behavior is similar, built for performance, not for per-request replay.

Teams that recognize this gap typically implement it as a dedicated middleware service, a Lambda authorizer that writes to DynamoDB, or a shared library that every backend imports. The result is custom code running in the critical request path with its own deployment cycle, its own failure modes, and no visibility from the gateway's request logs. When an incident occurs (a duplicate charge, a missed retry), the investigation spans multiple systems without a single source of truth.

MuleSoft's DataWeave can model stateful patterns, but connecting them to a shared Redis instance across multiple API flows requires the same bespoke integration work, plus the Anypoint Platform's overhead for every configuration change. The problem is not that the tools don't exist; it is that no common gateway platform makes it easy to attach stateful, key-based caching to a specific API endpoint as a first-class gateway behavior.

The idempotent payment API gateway workflow in Zerq

Zerq's workflow builder attaches a processing pipeline to any proxy. Every request to that proxy executes the workflow, which can read from Redis, call backends, apply conditions, and return responses, all in the gateway's request path, with every step logged.

An idempotency workflow uses five node types:

  • redis_node (GET): looks up the idempotency key in Redis
  • condition_node: branches on whether a prior cached response was found
  • proxy_node: forwards new requests to the payment backend
  • redis_node (SET): stores the backend response after a successful upstream call
  • response_node: returns either the cached or the fresh response to the client
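The way those five nodes compose can be sketched in plain Python. The function names and dict shapes below are illustrative, not Zerq APIs; a small fake with `get`/`setex` methods stands in for Redis.

```python
CACHE_TTL_SECONDS = 86400  # 24 hours, matching the workflow's SET TTL

def handle_payment(request, redis, call_backend):
    """Sketch of the workflow: GET -> condition -> proxy -> SET -> respond."""
    key = "payment:idempotency:" + request["headers"]["idempotency-key"]

    cached = redis.get(key)                      # redis_node (GET)
    if cached is not None:                       # condition_node: duplicate branch
        return {"status": 200, "body": cached,
                "headers": {"X-Idempotency-Replay": "true"}}

    response = call_backend(request)             # proxy_node: new_request branch
    if response["status"] == 200:                # redis_node (SET) on success only
        redis.setex(key, CACHE_TTL_SECONDS, response["body"])
    return response                              # response_node: fresh response

# Usage with an in-memory stand-in for Redis:
class FakeRedis:
    def __init__(self): self._d = {}
    def get(self, k): return self._d.get(k)
    def setex(self, k, ttl, v): self._d[k] = v

backend_calls = []
def backend(req):
    backend_calls.append(req)
    return {"status": 200, "body": '{"txn_id": "t1"}', "headers": {}}

r = FakeRedis()
req = {"headers": {"idempotency-key": "abc"}}
handle_payment(req, r, backend)                  # backend called
second = handle_payment(req, r, backend)         # served from cache
assert len(backend_calls) == 1
assert second["headers"]["X-Idempotency-Replay"] == "true"
```

Note that only successful (200) responses are cached: a failed attempt should remain retryable against the backend.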

What the request path looks like

For a new request with no prior cached response:

Client sends: POST /payments/transfer
  Header: Idempotency-Key: 7f3a8e2b-1c4d-4f5e-9b2a-3d7e8f1a2b3c

→ Zerq gateway receives request
→ redis_node GET: key "payment:idempotency:7f3a8e2b-..." → null (cache miss)
→ condition_node: result is null → route to "new_request" branch
→ proxy_node: forwards to https://payments.internal/v2/transfer
→ Backend processes payment, returns 200 with transaction ID
→ redis_node SET: stores response body at "payment:idempotency:7f3a8e2b-..."
    with TTL: 86400 seconds (24 hours)
→ response_node: returns 200 with transaction body to client
   Latency: ~187ms (backend round-trip included)

For a retry with the same idempotency key:

Client resends: POST /payments/transfer (network retry after timeout)
  Header: Idempotency-Key: 7f3a8e2b-1c4d-4f5e-9b2a-3d7e8f1a2b3c (same key)

→ Zerq gateway receives request
→ redis_node GET: key "payment:idempotency:7f3a8e2b-..." → cached response body (hit)
→ condition_node: result is not null → route to "duplicate" branch
→ response_node: returns cached 200 body with X-Idempotency-Replay: true
   Latency: ~4ms (backend never called)

The payment backend receives one request and processes one payment. The client receives two responses with identical bodies. No double charge.
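The contract only holds if the client reuses the original key on the retry. A sketch of the client's side of the bargain (the helper names are hypothetical): generate the key once per logical payment, then resend the same request object on timeout.

```python
import uuid

def make_transfer_request(payload):
    """Generate the idempotency key once per logical payment, not per attempt."""
    key = str(uuid.uuid4())
    return {"method": "POST", "path": "/payments/transfer",
            "headers": {"Idempotency-Key": key}, "body": payload}

def send_with_retry(request, send, max_attempts=3):
    """Resend the same request object, so the key is identical on every attempt."""
    last_error = None
    for _ in range(max_attempts):
        try:
            return send(request)
        except TimeoutError as exc:
            last_error = exc            # timeout: safe to retry with the same key
    raise last_error

# Usage: a send that times out once, then succeeds.
attempts = []
def flaky_send(req):
    attempts.append(req["headers"]["Idempotency-Key"])
    if len(attempts) == 1:
        raise TimeoutError("network timeout")
    return {"status": 200}

resp = send_with_retry(make_transfer_request({"amount": 100}), flaky_send)
assert resp["status"] == 200
assert len(set(attempts)) == 1          # same key on both attempts
```

A client that generates a fresh key per attempt defeats the whole mechanism, which is why the header contract belongs in partner documentation.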

Building the idempotency workflow: step by step

Open the proxy for your payment collection in the management UI and click Edit Workflow. Select custom workflow to start from the default http_trigger -> response_node baseline.

Step 1: Add the Redis key lookup

Add a redis_node after the http_trigger. Configure it with a GET operation and a namespaced key built from the incoming Idempotency-Key header:

{
  "id": "check_idempotency",
  "type": "redis_node",
  "config": {
    "operation": "get",
    "key": "{{ 'payment:idempotency:' + $json['http_trigger'].request.headers['idempotency-key'] }}"
  }
}

Connect the http_trigger default edge to check_idempotency. Wire both the success and error branches of check_idempotency forward to the next node. A Redis connectivity failure should fail safe by treating the request as new rather than blocking payment traffic.
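The fail-safe choice amounts to one rule: a Redis error degrades to a cache miss, never to a blocked payment. As plain logic (not a Zerq config):

```python
def safe_lookup(redis_get, key):
    """Treat any Redis failure as a cache miss so payment traffic keeps flowing.
    Worst case is one duplicate reaching the backend, not a rejected payment."""
    try:
        return redis_get(key)
    except Exception:
        return None   # fail safe: behave as if no prior response exists

# A lookup against an unreachable Redis is indistinguishable from a miss:
def broken_get(key):
    raise ConnectionError("redis unreachable")

assert safe_lookup(broken_get, "payment:idempotency:abc") is None
```

This is a deliberate availability-over-strictness trade-off; teams that prefer to reject requests during a Redis outage would wire the error branch to a 503 response instead.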

Step 2: Add the condition branch

Add a condition_node to determine whether a cached response exists for this key:

{
  "id": "is_duplicate",
  "type": "condition_node",
  "config": {
    "conditions": [
      {
        "condition": "$json['check_idempotency'].result !== null",
        "output": "duplicate"
      }
    ],
    "default_output": "new_request"
  }
}

Connect both the success and error branches of check_idempotency to is_duplicate. The error branch wires to is_duplicate as the new_request case: a Redis failure routes the request onward to the backend.

Step 3: Return the cached response for duplicate requests

Add a response_node wired from is_duplicate.duplicate:

{
  "id": "return_cached",
  "type": "response_node",
  "config": {
    "status": 200,
    "headers": {
      "X-Idempotency-Replay": "true",
      "Content-Type": "application/json"
    },
    "body": "{{ $json['check_idempotency'].result }}"
  }
}

The X-Idempotency-Replay: true header signals to the client that this response came from cache. This header is also visible in the request log entry, making replays distinguishable from original requests during a compliance review.

Step 4: Handle new requests and store the result

Add a proxy_node wired from is_duplicate.new_request. Give it the ID call_payment. Wire call_payment.success to a redis_node SET operation that caches the upstream response:

{
  "id": "store_result",
  "type": "redis_node",
  "config": {
    "operation": "set",
    "key": "{{ 'payment:idempotency:' + $json['http_trigger'].request.headers['idempotency-key'] }}",
    "value": "{{ JSON.stringify($json['call_payment'].response.body) }}",
    "ttl_seconds": 86400
  }
}
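The 86400-second TTL means a retry arriving more than 24 hours after the original is treated as a brand-new request. Redis enforces this expiry itself; the sketch below models the same SETEX/GET semantics with a fake clock, purely to make the window concrete.

```python
class TTLStore:
    """Tiny SETEX/GET model: entries expire after ttl seconds of fake time."""
    def __init__(self):
        self._data = {}   # key -> (value, expires_at)
        self.now = 0      # fake clock, advanced manually below

    def setex(self, key, ttl, value):
        self._data[key] = (value, self.now + ttl)

    def get(self, key):
        entry = self._data.get(key)
        if entry is None or self.now >= entry[1]:
            return None   # expired keys read as a cache miss
        return entry[0]

store = TTLStore()
store.setex("payment:idempotency:abc", 86400, '{"txn_id": "t1"}')
store.now += 3600                     # 1 hour later: still a duplicate
assert store.get("payment:idempotency:abc") is not None
store.now += 86400                    # past the 24h window: treated as new
assert store.get("payment:idempotency:abc") is None
```

Pick the TTL to cover your clients' longest plausible retry window; a key that expires mid-retry reopens the double-charge gap.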

After the SET, add a response_node to return the backend's actual response to the client:

{
  "id": "return_fresh",
  "type": "response_node",
  "config": {
    "status": "{{ $json['call_payment'].response.status_code }}",
    "body": "{{ $json['call_payment'].response.body }}"
  }
}

Step 5: Enable and validate

Toggle Workflow Enabled on the proxy. All subsequent requests to that proxy immediately execute the workflow. Validate with three requests:

  1. POST /payments/transfer with a unique Idempotency-Key header; expect 200 with transaction body, no X-Idempotency-Replay header, latency in the 100-300ms range
  2. POST the identical request within 24 hours; expect 200 with the same body, X-Idempotency-Replay: true, latency under 10ms
  3. POST without an Idempotency-Key header; without a guard, the key template resolves to payment:idempotency:undefined and every keyless request shares the same cache entry. Close this gap by adding a condition_node before the Redis GET that checks the header is present and returns 400 Bad Request when it is absent.
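The guard in step 3 is a simple presence check before the Redis GET. Expressed as plain logic rather than a Zerq config:

```python
def validate_idempotency_header(headers):
    """Reject requests without an Idempotency-Key so keyless requests never
    collapse onto a shared 'undefined' cache entry."""
    key = headers.get("idempotency-key")
    if not key:
        return {"status": 400,
                "body": '{"error": "Idempotency-Key header is required"}'}
    return None   # header present: continue to the Redis GET

assert validate_idempotency_header({})["status"] == 400
assert validate_idempotency_header({"idempotency-key": "abc"}) is None
```

Rejecting early also keeps the 400 visible in the gateway logs, so a misbehaving client integration shows up before it reaches the payment backend.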

The Zerq workflow builder's Listen for request test mode lets you send a request directly into the workflow and inspect each node's output before enabling the workflow for live traffic. Use it to verify that the redis_node is returning the expected value on the success branch before proceeding.

What the request logs show

Both the original request and the retry appear in Zerq's request logs. The latency difference identifies the duplicate:

// Original request: backend was called
{
  "requestId": "req_01hz4ab3c5f8g0j2k4m6n8p0",
  "timestamp": "2026-04-20T11:14:23.421Z",
  "method": "POST",
  "path": "/payments/transfer",
  "targetEndpoint": "https://payments.internal/v2/transfer",
  "statusCode": 200,
  "latency": 187,
  "clientId": "client-uuid-partner-a",
  "profileId": "profile-uuid-partner-a-prod",
  "collection": "Payment Initiation Service",
  "clientIp": "203.0.113.14"
}

// Retry: served from Redis cache, backend not called
{
  "requestId": "req_01hz4ab3c5f8g0j2k4m6n8p1",
  "timestamp": "2026-04-20T11:14:29.887Z",
  "method": "POST",
  "path": "/payments/transfer",
  "targetEndpoint": "",
  "statusCode": 200,
  "latency": 4,
  "clientId": "client-uuid-partner-a",
  "profileId": "profile-uuid-partner-a-prod",
  "collection": "Payment Initiation Service",
  "clientIp": "203.0.113.14"
}

Both entries carry the same clientId, collection, and path. Filter by clientId in the Logs view to produce a per-partner payment record for any date range, originals and replays alike, without querying multiple systems. The empty targetEndpoint on the retry confirms the backend was not reached.

This matters for financial reconciliation. When a partner claims a payment was processed twice, the request log tells you immediately: one entry with a backend round-trip, one entry with a 4ms gateway-only response. No investigation across multiple log systems required.
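Given exported log entries shaped like the two examples above, separating originals from replays is a one-line filter on targetEndpoint. A sketch (field names follow the log examples; the entries here are illustrative):

```python
def split_replays(entries):
    """Partition request log entries into originals (backend was called)
    and replays (served from cache, empty targetEndpoint)."""
    originals = [e for e in entries if e["targetEndpoint"]]
    replays = [e for e in entries if not e["targetEndpoint"]]
    return originals, replays

# Usage against two illustrative entries:
logs = [
    {"requestId": "orig-1", "latency": 187,
     "targetEndpoint": "https://payments.internal/v2/transfer"},
    {"requestId": "retry-1", "latency": 4, "targetEndpoint": ""},
]
originals, replays = split_replays(logs)
assert len(originals) == 1 and len(replays) == 1
assert replays[0]["latency"] < 10
```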

What this looks like in practice

A B2B payments platform processing inter-company transfers had idempotency implemented across four services: a Node.js payments API, a Go reconciliation service, a Java settlement API, and a Python webhook dispatcher. Each had a slightly different implementation: different Redis key namespaces, different TTLs (24 hours, 48 hours, 7 days, no expiry), and different error response shapes when a duplicate was detected.

During a network incident that caused widespread client retries, the reconciliation team spent three days determining which transactions were genuine originals and which were duplicates, because the detection logic was inconsistent and the logs were spread across four separate systems.

After moving idempotency enforcement into a Zerq workflow:

  • All duplicate detection runs at the gateway before any backend receives the request
  • Redis credentials are configured once in Zerq's credentials store and referenced by the workflow, so no service team needs to manage a Redis connection string independently
  • Zerq's access control model (client, profile, collection) ensures the idempotency workflow only runs for authorized partners; unauthorized requests are rejected before reaching the workflow
  • Request logs for every payment call, original and retry, are in one place, filterable by client ID, time range, status code, and latency
  • The compliance team uses the Auditor role to review request logs and config changes without admin access to the platform

The four backend services removed their idempotency tables and the associated schema migration overhead. The reconciliation incident has not recurred.

A decision framework: gateway vs backend idempotency

Gateway-level idempotency handles the network retry case. Backend idempotency handles the business logic case. Use this framework to determine where each belongs in your architecture:

Consider gateway idempotency when:

  • You need consistent duplicate detection across multiple backend services without touching any of their code
  • Your payment clients are external partners who expect a standard Idempotency-Key header contract across all your endpoints
  • Your current per-service implementation has TTL, schema, or error-response inconsistencies you need to standardize
  • You want duplicate detection visible in the same audit trail as access control and rate limiting events
  • You want to reduce backend load from client retries during network instability

Consider backend idempotency when:

  • Your idempotency must span multiple endpoints that form a single business transaction (debit and credit must both succeed or both fail as a unit)
  • You need domain-specific rules to determine what constitutes a duplicate: same amount and same beneficiary and same account, not just the same header key
  • Your backend already has a mature, well-tested implementation with a low incident rate

For high-volume payment platforms, the two levels complement each other. Gateway idempotency eliminates duplicate backend calls caused by network conditions. Backend idempotency protects business invariants that the gateway cannot reason about. The Zerq architecture page covers Redis deployment, credential management, and multi-replica gateway configuration for production payment environments.

Closing

Zerq's workflow builder gives payment and fintech teams a way to enforce idempotency at the API gateway layer without changing backend code. The workflow runs on every request to a proxy, reads from and writes to Redis, and logs both original requests and replays in the same structured request log, with client identity, collection, and latency on every entry. Combined with Zerq's per-partner access controls and rate limits, the idempotency layer applies only to authorized clients making authenticated requests to their assigned payment collections. One implementation. Consistent behavior. One place to investigate when something goes wrong.


Zerq is an enterprise API gateway built for regulated industries — one platform for API management, AI agent access, compliance audit, and developer portal, running entirely in your own infrastructure. See how it works or request a demo to walk through your specific requirements.