
Enforcing open banking consent at the API gateway: a step-by-step workflow guide

Step-by-step guide to enforcing PSD2 consent at the API gateway with Zerq's workflow builder: real node config, full audit trail, no custom backend code.

  • open-banking
  • psd2
  • workflow-builder
  • compliance
  • banking
Zerq team

PSD2 open banking consent is a runtime check, not a configuration. A third-party provider receives a consent grant scoped to a specific account, specific permission set, and specific time window. That consent can be revoked by the customer at any moment, even after the TPP's access token has been issued and the gateway has verified the certificate. The open banking consent flow at the API gateway needs to reflect this: every request should verify that the consent backing it is still active, not just that the credentials are valid.
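
To make that concrete, here is a minimal sketch of what a consent grant carries and why validity is a per-request question. The field names are illustrative only, not the Berlin Group or Open Banking UK wire format:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ConsentGrant:
    # Hypothetical consent record; field names are illustrative only.
    consent_id: str
    account_id: str
    permissions: frozenset   # e.g. {"ReadAccountsDetail", "ReadTransactionsDetail"}
    expires_at: datetime
    revoked: bool = False

    def is_active(self, now: datetime) -> bool:
        # Revocation can land at any moment, after the access token was
        # issued, so this has to be evaluated on every request, not once
        # at token issuance.
        return not self.revoked and now < self.expires_at
```

A grant that was active when the token was issued can return `False` here one request later, which is exactly why the check belongs in the request path.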

Most ASPSP implementations today do this check in backend code. The gateway handles mutual TLS (mTLS) authentication, client identity, and token verification. The consent check happens inside the account-data service or a thin BFF layer in front of it. The gateway's access logs show authentication and rate-limit enforcement but not the consent decision. When a regulator asks whether a specific TPP had valid consent for a specific data access at a specific time, you are left joining logs from two different systems to reconstruct the answer.

This post walks through building a consent validation workflow in Zerq that moves this check into the gateway path. The full consent decision appears in the gateway request log. The backend never receives requests with revoked or expired consent. The workflow is configured in the UI, with no code deployment required when consent logic changes.

Why existing approaches leave a gap

The standard pattern is a consent-check enrichment layer: a sidecar, a BFF service, or a Lambda authorizer that calls the consent registry before forwarding the request. Every variant of this pattern has the same structural problem: the consent decision lives outside the gateway's access control model.

Kong and AWS API Gateway can validate JWT claims and rate-limit by client, but calling an external service conditionally and branching on the result requires a custom plugin (Kong) or a Lambda authorizer (AWS). Both mean you are writing and deploying application code to handle what should be gateway policy. When consent logic needs to change (adding a scope check, handling expired consent differently, short-circuiting with a Redis cache), that is a code change and a deployment, not a configuration change.
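
To illustrate the point, a Lambda-authorizer-style consent check might look like the sketch below. The registry call is injected as a function so the branching logic is visible; the return value follows the AWS custom-authorizer policy contract, but the registry interface and response shape are assumptions. Changing any line of this is a code deployment:

```python
def consent_authorizer(event, fetch_consent_status):
    """Minimal Lambda-authorizer-shaped handler (sketch).

    fetch_consent_status stands in for the HTTP call to the consent
    registry; its signature and status values are assumptions. The point:
    this consent decision lives in deployed application code, outside the
    gateway's own policy model.
    """
    client_id = event["headers"]["x-client-id"]
    status = fetch_consent_status(
        account_id=event["pathParameters"]["accountId"],
        tpp_client_id=client_id,
    )
    effect = "Allow" if status == "active" else "Deny"
    # The IAM policy document a custom authorizer returns to API Gateway:
    return {
        "principalId": client_id,
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": effect,
                "Resource": event["methodArn"],
            }],
        },
    }
```

Adding a scope check or a cache short-circuit to this function means editing, testing, and redeploying the Lambda, not editing gateway configuration.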

Apigee supports server-side enrichment in its policy scripting model, but the proprietary runtime and cloud control plane make it unsuitable for regulated environments that require air-gapped deployment. MuleSoft has similar constraints. In both cases, consent enforcement lives in a scripting layer that is invisible to the gateway's audit trail: a blocked request appears as a failure in Apigee's analytics but not in the API access log your compliance team reads.

The deeper issue: when consent enforcement is application code, your security model has a moving part that no gateway control can inspect or enforce. The gateway trusts that the application is doing the right thing.

Open banking consent enforcement in the gateway path

Zerq's workflow builder runs inside the gateway request path. Attaching a workflow to a proxy replaces simple pass-through forwarding with a directed acyclic graph of nodes that execute before the response is returned to the caller. Workflow overhead is typically under 5ms per node. The consent check described below adds one external HTTP call and one condition evaluation: in practice, 30-80ms of additional latency on the allowed path, paid once per request.

For a PSD2 account-data endpoint, the consent enforcement workflow looks like this:

http_trigger
  └─ n_validate_consent (http_request_node, calls consent registry)
       ├─ success → n_check_consent (condition_node)
       │               ├─ valid   → n_forward (proxy_node) → ASPSP backend
       │               ├─ denied  → n_deny (response_node, 403)
       │               └─ default → n_unknown (response_node, 503)
       └─ error   → n_svc_error (response_node, 503)

The gateway validates the TPP's mTLS certificate and client identity before the workflow runs. By the time http_trigger fires, the client is authenticated. The workflow then decides whether consent is still active.

Building the workflow: step-by-step

  1. Go to Collections, open your open banking collection, and click into the transactions proxy.
  2. Click Edit Workflow. The builder opens with the default http_trigger → response_node.
  3. Delete the default edge between http_trigger and response_node.
  4. Click Add node in the toolbar, search for http_request_node, and place it on the canvas. Set its ID to n_validate_consent.
  5. Connect http_trigger's default output handle to n_validate_consent's input.
  6. Configure n_validate_consent to call your consent registry:
{
  "id": "n_validate_consent",
  "type": "http_request_node",
  "config": {
    "timeout_ms": 2000,
    "retry_config": { "max_attempts": 1, "backoff_ms": 0 }
  },
  "inputs": {
    "url": "https://consent.bank.internal/v1/validate",
    "method": "POST",
    "headers": { "Content-Type": ["application/json"] },
    "body": {
      "type": "json",
      "content": {
        "accountId": "{{ $json['http_trigger'].request.path_params.accountId }}",
        "tppClientId": "{{ $json['http_trigger'].request.headers['x-client-id'] }}",
        "scope": "{{ $json['http_trigger'].request.query.scope }}"
      }
    }
  }
}

The expression $json['http_trigger'].request.headers['x-client-id'] reads the gateway-injected client ID header: this is the authenticated TPP identity, not a claim the TPP asserts itself. The consent registry receives a verified identity, not a self-reported one.

  7. Add a condition_node (ID: n_check_consent). Connect n_validate_consent's success output handle to it.
  8. Configure the condition checks:
{
  "id": "n_check_consent",
  "type": "condition_node",
  "config": {
    "conditions": [
      {
        "condition": "$json['n_validate_consent'].response.body.status === 'active'",
        "output": "valid"
      },
      {
        "condition": "$json['n_validate_consent'].response.body.status === 'revoked' || $json['n_validate_consent'].response.body.status === 'expired'",
        "output": "denied"
      }
    ],
    "default_output": "unknown"
  }
}

Note that conditions use plain JavaScript, not template syntax. The output string on each condition becomes the named edge leaving this node. Wire your downstream nodes to the matching output handle.
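
The branching amounts to a small decision table. As a sketch, mirroring the condition expressions above:

```python
def consent_branch(status):
    # Mirrors n_check_consent: the registry's status field picks the
    # named edge that leaves the node.
    if status == "active":
        return "valid"    # -> n_forward (proxy to the ASPSP backend)
    if status in ("revoked", "expired"):
        return "denied"   # -> n_deny (403)
    return "unknown"      # -> n_unknown (503): fail closed on anything else
```

Note the fail-closed default: a malformed or unexpected registry response never reaches the backend.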

  9. Add a proxy_node (ID: n_forward). Connect n_check_consent's valid output to it. The proxy node uses the upstream target configured on the proxy (no URL needed here).

  10. Add a response_node (ID: n_deny) for the denied output:

{
  "id": "n_deny",
  "type": "response_node",
  "config": {
    "status": 403,
    "headers": { "Content-Type": ["application/json"] },
    "body": {
      "error": "consent_revoked",
      "message": "Customer consent for this data access has been revoked or has expired.",
      "consentId": "{{ $json['n_validate_consent'].response.body.consentId }}"
    }
  }
}

Including consentId in the response body lets the TPP reference the specific consent grant when raising a support issue, without exposing any account data.

  11. Add a response_node (status 503) for each of the unknown branch from n_check_consent (n_unknown) and the error branch from n_validate_consent (n_svc_error). These handle the case where the consent registry is unreachable or returns an unexpected response.

  12. Click Validate in the toolbar. Zerq checks for disconnected nodes, missing required fields, invalid expression syntax, and missing entry triggers.

  13. Click Enable Workflow. The proxy now enforces consent on every request.

What the request log shows

Every request through this proxy produces a log entry in the Zerq observability layer. For a request where consent has been revoked:

{
  "requestId": "req_9fXd3k",
  "timestamp": "2026-04-23T09:14:22.113Z",
  "method": "GET",
  "path": "/accounts/GB29NWBK60161331926819/transactions",
  "status": 403,
  "latency": 47,
  "clientId": "acme-fintech-tpp",
  "profileId": "acme-prod-mtls",
  "collection": "psd2-account-data",
  "clientIp": "203.0.113.42",
  "responseBody": "{\"error\":\"consent_revoked\",\"consentId\":\"cns_7a3b1c\"}"
}

For a request where consent is active and the backend responds:

{
  "requestId": "req_7bKm1p",
  "timestamp": "2026-04-23T09:14:23.891Z",
  "method": "GET",
  "path": "/accounts/GB29NWBK60161331926819/transactions",
  "status": 200,
  "latency": 118,
  "clientId": "acme-fintech-tpp",
  "profileId": "acme-prod-mtls",
  "collection": "psd2-account-data",
  "clientIp": "203.0.113.42",
  "responseBody": "{\"transactions\": [...]}"
}

Three things stand out here. First, clientId and profileId identify the authenticated TPP (not a header the TPP sends) because the gateway resolves these from the mTLS session. Second, collection scopes the log entry to the open banking API product, making it easy to filter for all PSD2 traffic. Third, the latency difference: 47ms for a denied request (consent check only, no backend round-trip) versus 118ms for an allowed one (consent check plus backend). Your backend infrastructure never receives requests where consent is invalid.

A compliance team member with the Auditor role can filter these logs by clientId, collection, status code 403, or date range to produce a full revocation-enforcement report without accessing gateway configuration.
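
As an illustration, producing that report from exported JSON-lines logs (entry fields as shown above) can be a short script; the function name here is hypothetical:

```python
import json
from datetime import datetime

def revocation_report(log_lines, client_id, start, end):
    """Filter gateway log entries down to consent denials for one TPP.

    Assumes the entry shape shown above: status, clientId, timestamp,
    and the machine-filterable consent_revoked marker in responseBody.
    """
    report = []
    for line in log_lines:
        entry = json.loads(line)
        ts = datetime.fromisoformat(entry["timestamp"].replace("Z", "+00:00"))
        if (entry["status"] == 403
                and entry["clientId"] == client_id
                and start <= ts <= end
                and '"error":"consent_revoked"' in entry.get("responseBody", "")):
            report.append(entry)
    return report
```

Because the denial carries a structured error code rather than a free-text message, the filter needs no heuristics.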

Updating consent logic without a deployment

Consent rules change. Regulations tighten. You might want to cache consent responses in Redis to reduce latency from 50ms to under 5ms per check. Or add a secondary check for a new permission scope. Or route differently based on the TPP's regulatory jurisdiction.

Because the workflow lives in the gateway configuration, your platform team makes these changes in the workflow builder UI without touching backend code. To add a Redis cache layer in front of the HTTP consent check:

  1. Add a redis_node (ID: n_cache_check) between http_trigger and n_validate_consent
  2. Set the lookup key to {{ $json['http_trigger'].request.headers['x-client-id'] + ':' + $json['http_trigger'].request.path_params.accountId }}
  3. Add a condition_node after the cache check to read the hit/miss result
  4. On cache miss, route through n_validate_consent as before, then write the result back to Redis
  5. On cache hit, route directly to n_check_consent using the cached status

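The cache-aside flow those steps describe can be sketched as follows. A plain dict stands in for Redis, the key format mirrors step 2, and in production the write-back would carry a short TTL so revocations propagate quickly; all function names here are illustrative:

```python
def check_consent(cache, registry_lookup, client_id, account_id):
    """Cache-aside consent check (sketch).

    `cache` stands in for Redis; `registry_lookup` stands in for the
    HTTP call n_validate_consent makes to the consent registry.
    """
    key = f"{client_id}:{account_id}"    # same key shape as the redis_node lookup
    status = cache.get(key)
    if status is None:                   # cache miss: hit the registry
        status = registry_lookup(client_id, account_id)
        if status in ("active", "revoked", "expired"):
            # Write-back; a real Redis SET here would use a short TTL so a
            # revocation is never served stale for long.
            cache[key] = status
    return status
```
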
The change takes effect when you save and re-enable the workflow. It is also captured in the audit log: who modified the workflow, the timestamp, and the full before/after configuration. That audit entry satisfies any regulator question about when consent-check behaviour changed.

Passing verified consent context to the backend

The set_node lets you inject information into the forwarded request. After n_check_consent confirms consent is active and before n_forward sends the request upstream, add a set_node that captures the consent fields:

{
  "id": "n_enrich",
  "type": "set_node",
  "config": {
    "assignments": {
      "consentId": "{{ $json['n_validate_consent'].response.body.consentId }}",
      "consentScope": "{{ $json['n_validate_consent'].response.body.scope }}"
    }
  }
}

Then in n_forward's proxy config, add these as request headers before forwarding. The backend receives X-Consent-ID and X-Consent-Scope on every request that passed the gateway consent check, without the backend needing to call the consent registry again. Consent data is resolved once, at the gateway boundary, and the verified result flows downstream.
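
On the backend side, consuming the verified context is then plain header parsing. A sketch (function name hypothetical; trusting these headers is safe only when the backend is reachable exclusively through the gateway):

```python
def consent_context(headers):
    """Extract gateway-verified consent context from the forwarded request.

    Assumes network policy guarantees every request traversed the gateway,
    so the presence of these headers implies the consent check passed.
    """
    try:
        return {
            "consent_id": headers["X-Consent-ID"],
            "scope": headers["X-Consent-Scope"],
        }
    except KeyError:
        # A request without these headers did not pass the gateway check.
        raise PermissionError("missing gateway consent context")
```
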

What this looks like in practice

Consider a mid-tier retail bank with twelve active TPP integrations. Before moving consent enforcement to the gateway, the account-data service owned the consent check. That service logged a 403 when consent was invalid, but in the gateway's access log the request simply appeared to reach the upstream and return a 403 from the backend, indistinguishable from any other 4xx business response. Reconstructing a full TPP access history for a regulatory inquiry meant joining gateway logs with the account-data service logs, because the consent decision was invisible at the gateway boundary.

After building the consent enforcement workflow, revoked-consent requests are rejected in the gateway before touching the backend. The 403 in the request log now carries "error": "consent_revoked" in the response body, making it machine-filterable. The account-data service no longer handles revoked-consent cases at all. It can assume any request it receives has valid, active consent, because the gateway verified it. The bank's open banking team reduced backend noise, simplified the backend's authorization model, and consolidated the compliance audit trail to a single log source.

The platform team also removed the separate enrichment service that had been calling the consent registry. That service (with its own deployment pipeline, its own SLA, and its own logs) is now replaced by a workflow node in the gateway configuration. One fewer moving part in the PSD2 compliance stack.

Closing

Moving the open banking consent flow to the API gateway means the gateway is the single point of record for both authentication and authorization. Every consent decision appears in the request log with the authenticated client identity, the consent status, and the account path, in one place, with one retention policy, accessible to one compliance role. The backend is simplified: it receives only pre-validated requests. The platform team owns consent logic as gateway configuration, not backend code. When consent rules change, the workflow editor is the deployment.


Zerq is an enterprise API gateway built for regulated industries — one platform for API management, AI agent access, compliance audit, and developer portal, running entirely in your own infrastructure. See how it works or request a demo to walk through your specific requirements.