
The Zerq workflow builder: a complete guide to visual API orchestration

How to use the Zerq API gateway workflow builder: node types, expressions, templates, and step-by-step instructions for fan-out, routing, and transformation.

  • workflows
  • api-management
  • how-to
  • platform-engineering
Zerq team

Every platform team eventually hits the same wall. A new API requirement arrives: aggregate three microservices into one response, validate the request body before it reaches the backend, route traffic to a new service for a subset of partners. The answer is always the same: write more code, deploy another service, add it to the maintenance backlog.

That code accumulates. Over time it becomes a layer of custom middleware that nobody fully understands, that breaks when backends change, and that takes a sprint to modify. The API gateway exists precisely to absorb this kind of logic, but most gateways make it harder than it needs to be: Kong plugins written in Lua, Apigee policies in XML, AWS Lambda authorizers that are just serverless functions with extra steps. None of these feel like the visual API gateway workflow builder that platform engineers actually want.

Zerq's workflow builder is a visual, node-based canvas attached directly to a proxy. It runs inside the gateway path with no external service, no queue, no additional deployment, and it handles the full range of API orchestration patterns: fan-out aggregation, conditional routing, request validation, response transformation, database enrichment, and error handling. This guide walks through how it works, what the nodes do, and how to use it.

How the workflow builder fits into Zerq

In Zerq, every API you expose is a proxy inside a collection. By default, a proxy is a simple pass-through: request arrives, gateway authenticates and rate-limits it, request forwards to the upstream, response comes back. That covers most cases.

When you need more, you attach a workflow to the proxy. From that point on, every request runs through the workflow graph instead of simple forwarding. The workflow has full access to the request context: method, path, path parameters, query string, headers, body, and the responses from any upstream services it calls.

To open the builder:

  1. Go to Collections and open the relevant collection.
  2. Click into the proxy you want to add logic to.
  3. Click Edit Workflow.

The canvas opens with the default starting graph: http_trigger → response_node. You build from there.

The canvas and node model

The workflow is a directed acyclic graph (DAG) of connected nodes. Each node has an input handle on the left and one or more output handles on the right. You connect nodes by dragging from an output handle to the input handle of the next node.

Output handles can be:

  • default: the normal execution path for nodes that don't branch
  • success / error: used by nodes that make outbound calls or run code
  • valid / invalid: used by the validate_node
  • custom labels: used by condition_node and switch_node for named branches

Nodes that use named outputs are called branching nodes: the engine routes execution to the single output whose label matches the node's result. Nodes without named outputs follow all outgoing edges unconditionally, one after another. The parallel_node is the exception: it executes all outgoing edges concurrently.
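
For example, a branching http_request_node with the ID fetch_orders (an illustrative name) exposes success and error handles, and each gets its own downstream path:

fetch_orders (success) → set_node → response_node
fetch_orders (error) → response_node returning 502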

Node categories

Zerq ships 29 node types across eight categories:

  • Triggers: http_trigger, manual_trigger, cron_trigger, kafka_consumer, imap_trigger
  • HTTP and proxy: proxy_node, http_request_node, response_node
  • Data processing: code_node, set_node, xml_node
  • Logic and routing: condition_node, validate_node, switch_node, loop_node
  • Flow control: delay_node, parallel_node, merge_node, redirect_node, stop_node
  • Security: jwt_node
  • Database: mongodb_node, postgres_node, sqlserver_node, oracle_node, redis_node
  • Messaging: smtp_node, sendgrid_node, kafka_node

For API request flows, the entry node is always http_trigger. Every execution path must end at response_node (to return a response) or redirect_node (to issue a redirect). stop_node terminates a branch without sending an HTTP response, which is useful for fire-and-forget side effects like sending a notification after the response has already been sent.
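
For example, a proxy that forwards the request, returns the upstream response, and then sends a notification email without delaying the client could be sketched like this, in the same arrow notation the templates below use (smtp_node stands in for any side-effect node):

http_trigger → proxy_node → [response_node, smtp_node → stop_node]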

Expressions and data flow

Every node's output is available to all downstream nodes through the $json context object. The syntax is:

$json['node_id'].field.subfield

The http_trigger node always has this structure:

$json['http_trigger'].request.method          // "GET", "POST", etc.
$json['http_trigger'].request.path            // "/accounts/123/summary"
$json['http_trigger'].request.path_params.id  // path parameter {id}
$json['http_trigger'].request.query.limit     // query string ?limit=10
$json['http_trigger'].request.headers['authorization']
$json['http_trigger'].request.body            // parsed request body

For an http_request_node with ID fetch_orders:

$json['fetch_orders'].response.status_code    // 200
$json['fetch_orders'].response.body           // parsed JSON body
$json['fetch_orders'].response.body.items     // nested field

Expressions use double curly brace syntax and support full ES6+ JavaScript:

// URL with path parameter
https://internal.example.com/orders/{{ $json['http_trigger'].request.path_params.orderId }}

// Conditional default
{{ $json['http_trigger'].request.query.limit ? parseInt($json['http_trigger'].request.query.limit) : 20 }}

// Filter upstream array
{{ $json['get_orders'].response.body.items.filter(o => o.status === 'pending').length }}

// Compose JSON for response
{{ JSON.stringify({
  user: $json['get_user'].response.body,
  balance: $json['get_balance'].response.body.amount
}) }}

Node IDs matter. Use descriptive IDs like get_user, fetch_orders, check_balance rather than auto-generated ones. Click a node to rename its ID before building expressions that reference it.
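
The difference shows up in every expression that references the node. A before-and-after sketch (the auto-generated ID http_request_1 is illustrative, following the same pattern as the proxy_1 ID used later in this guide):

// auto-generated ID: unreadable, and brittle if the node is recreated
$json['http_request_1'].response.body.email

// descriptive ID: self-documenting
$json['get_user'].response.body.email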

Four prebuilt templates

Click Templates in the builder toolbar to access four starting patterns. Each pre-populates the canvas with a working graph you can modify.

Orchestration: fan out to multiple backends and merge results:

http_trigger → [http_request_node A, http_request_node B] → merge_node → response_node

Transformation: reshape request and response between client and upstream:

http_trigger → proxy_node → set_node → response_node

Resiliency: retry and fallback on upstream failure:

http_trigger → http_request_node → condition_node → [success path, fallback path] → response_node

Canary routing: split traffic between stable and new backend:

http_trigger → condition_node → [new backend, stable backend] → response_node

Start with the template closest to your use case, then customize node configuration.
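
For instance, the canary routing template's condition_node can be driven by a header flag. A minimal sketch using the condition_node configuration format shown later in this guide (the x-canary header, node ID, and output labels are all illustrative):

{
  "id": "canary_split",
  "type": "condition_node",
  "config": {
    "conditions": [
      {
        "condition": "$json['http_trigger'].request.headers['x-canary'] === 'true'",
        "output": "new_backend"
      }
    ],
    "default_output": "stable_backend"
  }
}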

Step-by-step: build a multi-backend fan-out workflow

This pattern is common in platform engineering: a client requests an account summary, and the response must aggregate data from three separate microservices. Assembling this in application code means writing and maintaining a backend-for-frontend service. In Zerq it is a workflow on the gateway proxy.

Goal: GET /accounts/{accountId}/summary returns a merged response containing account details, recent transactions, and credit score from three separate internal services.

  1. Open Collections, select your collection, open the account-summary proxy, click Edit Workflow.
  2. Click Templates, select Orchestration. The canvas opens with a two-branch fan-out template.
  3. Rename the first http_request_node to get_account. Set its URL to:
    https://accounts.internal/v1/accounts/{{ $json['http_trigger'].request.path_params.accountId }}
    
    Set method to GET, timeout to 3000 ms.
  4. Add a second http_request_node, rename it get_transactions, URL:
    https://ledger.internal/v1/accounts/{{ $json['http_trigger'].request.path_params.accountId }}/transactions?limit=10
    
  5. Add a third http_request_node, rename it get_credit, URL:
    https://credit.internal/v1/scores/{{ $json['http_trigger'].request.path_params.accountId }}
    
  6. Add a parallel_node before the three request nodes. Connect http_trigger to parallel_node. Connect parallel_node to get_account, get_transactions, and get_credit.
  7. Add a merge_node. Connect the success output of all three http_request_node nodes to merge_node.
  8. Add a set_node, rename it compose_response. Set one assignment:
    • Key: summary
    • Value (expression):
    {{ JSON.stringify({
      account: $json['get_account'].response.body,
      transactions: $json['get_transactions'].response.body.items,
      creditScore: $json['get_credit'].response.body.score
    }) }}
    
  9. Connect merge_node to compose_response, then compose_response to response_node. In response_node, set status 200 and body {{ $json['compose_response'].summary }}.
  10. Add an error path for each backend: connect the error output of each http_request_node to a response_node that returns status 502 and a structured error body.
  11. Click Save, then click Execute workflow in the toolbar. Set a test accountId path parameter and run. Inspect each node's output in the canvas.
  12. Once the test passes, toggle Enable Workflow to activate production traffic routing.

The three backend calls execute in parallel through parallel_node. merge_node waits for all three to complete before passing control downstream. Typical added latency is under 5ms per node, and the total latency is bounded by the slowest of the three backends, not their sum.
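
Assuming typical payloads from the three services, the client receives a single merged document along these lines (all field values are invented for illustration):

{
  "account": { "id": "acct-2041-7753", "status": "active" },
  "transactions": [
    { "id": "txn-981", "amount": -42.10, "currency": "USD" }
  ],
  "creditScore": 742
}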

Step-by-step: add request validation with branched error handling

Before the fan-out reaches the backends, you want to validate that the accountId path parameter looks like a valid account identifier. Malformed requests should return 400 immediately, before any backend is called.

  1. Click Add node and select validate_node. Rename it validate_account_id.
  2. Connect http_trigger to validate_account_id, replacing the direct edge to parallel_node.
  3. In the validate_node config, set:
    {
      "inputs": {
        "value": "{{ $json['http_trigger'].request.path_params.accountId }}"
      },
      "config": {
        "schema": {
          "type": "string",
          "pattern": "^[a-zA-Z0-9\\-]{8,36}$"
        }
      }
    }
    
  4. Connect the valid output to parallel_node.
  5. Add a response_node for the invalid path. Connect the invalid output to this new response_node. Configure it:
    {
      "status": 400,
      "body": "{\"error\": \"invalid_account_id\", \"message\": \"accountId must be 8-36 alphanumeric characters\"}"
    }
    
  6. Click Save and Validate. The Validate button checks for disconnected nodes, missing required fields, and expression syntax errors.

With this in place, every malformed request is rejected at the gateway with a deterministic 400 response. The validation runs before any backend call is made.
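
To sanity-check the pattern before saving, evaluate a few sample values against it in plain JavaScript (the account IDs are made up):

// the schema pattern from step 3: 8-36 characters of letters, digits, or hyphens
const accountIdPattern = /^[a-zA-Z0-9\-]{8,36}$/;

accountIdPattern.test('acct-2041-7753'); // true: 14 allowed characters
accountIdPattern.test('ab3');            // false: shorter than 8 characters
accountIdPattern.test('acct_2041');      // false: underscore is not in the character class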

Conditional routing with condition_node and switch_node

The condition_node evaluates conditions in order and routes to the first matching named output. Use it when the branching logic involves complex predicates:

{
  "id": "n_cond",
  "type": "condition_node",
  "config": {
    "conditions": [
      {
        "condition": "$json['http_trigger'].request.headers['x-partner-tier'] === 'premium'",
        "output": "premium_path"
      },
      {
        "condition": "$json['proxy_1'].response.status_code >= 500",
        "output": "upstream_error"
      }
    ],
    "default_output": "standard_path"
  }
}

Wire named output edges to the corresponding downstream nodes. Always wire the default_output: requests that match no condition fall through to it, and an unwired default silently drops the request.
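
Wired out, the example above produces three downstream paths (the backend nodes are illustrative):

n_cond (premium_path) → premium backend → response_node
n_cond (upstream_error) → response_node returning 502
n_cond (standard_path, the default) → standard backend → response_node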

The switch_node is the right tool when you are branching on a single value with multiple discrete cases:

{
  "id": "route_by_method",
  "type": "switch_node",
  "config": {
    "value_expression": "{{ $json['http_trigger'].request.method }}",
    "cases": [
      { "value": "GET",    "output": "read_path" },
      { "value": "POST",   "output": "write_path" },
      { "value": "DELETE", "output": "delete_path" }
    ],
    "default_output": "method_not_allowed"
  }
}

Wire method_not_allowed to a response_node returning status 405. The switch_node uses string comparison internally, so the case values must match exactly, including letter casing.
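
A minimal response_node configuration for that branch, using the same status and body format as the 400 example earlier (the message text is illustrative):

{
  "status": 405,
  "body": "{\"error\": \"method_not_allowed\", \"message\": \"supported methods: GET, POST, DELETE\"}"
}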

Testing before you enable

Two test modes are available regardless of whether the workflow is enabled.

Execute workflow runs the full graph with inputs you provide. For http_trigger workflows, set the test method, path, headers, and body in the panel. After execution, each node on the canvas shows its output. Click any node to see what it received and returned. If execution fails, a Run failed chip appears in the toolbar. Click Details to read the full error message in an overlay panel.

Listen for Request (available from the http_trigger node's config panel) opens a temporary URL. Send a real HTTP request to it and the payload is captured, then the workflow runs automatically so you can inspect all node outputs in one pass.

Run at least these scenarios before enabling:

  1. A valid request with correct parameters: verify 200 and the expected response body.
  2. A malformed request: verify 400 from the validate_node invalid branch.
  3. A request where one backend returns 500: verify the error branch returns 502 and does not expose internal details.
  4. A request with an invalid auth token: verify 401 is returned by the gateway layer before the workflow runs.

Save, validate, and enable

Save persists the workflow definition. It does not activate production traffic. The Save button is disabled when there are no unsaved changes and becomes active as soon as you edit anything on the canvas.

Validate checks the workflow structure: exactly one entry trigger, no disconnected nodes, no missing required fields, no circular dependencies, valid expression syntax. Run Validate before every enable.

Enable Workflow activates the workflow on the proxy. For http_trigger workflows, all live traffic to the proxy starts running through the graph. For trigger-based workflows (cron_trigger, kafka_consumer, imap_trigger), enabling starts the production schedulers. Disabling a workflow pauses production execution but does not delete it: the proxy falls back to simple pass-through forwarding, and all test modes remain available.

What this looks like in practice

A financial data platform manages APIs for a set of B2B partners, each with different data entitlements. The core challenge: partners call a single /portfolio/summary endpoint, but the underlying data comes from four separate microservices: equity positions, bond positions, cash balances, and a risk score engine. The risk score engine is slower than the others and is not required for all partners.

Before Zerq, this aggregation lived in a Node.js BFF service. Every time a microservice changed its response shape, someone had to update the BFF, cut a release, and deploy. The BFF had no visibility into which partner triggered which request.

After migrating to Zerq, the team built a workflow on the portfolio-summary proxy. http_trigger flows into a validate_node that checks the portfolioId format. The valid branch reaches a condition_node that checks whether the partner's access profile includes risk scoring. Both branches fan out to the relevant backends via parallel_node, merge_node collects results, a set_node composes the final response conditional on which fields were populated, and response_node returns the merged payload.
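
The conditional composition in that set_node might look like the following sketch. The node IDs are hypothetical, and it assumes a node that was skipped at runtime is simply absent from $json:

{{ JSON.stringify({
  equities: $json['get_equities'].response.body,
  bonds: $json['get_bonds'].response.body,
  cash: $json['get_cash'].response.body,
  ...($json['get_risk']
    ? { riskScore: $json['get_risk'].response.body.score }
    : {})
}) }}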

The risk score engine branch is only triggered for partners whose profile has that endpoint in scope. Partners without risk scoring access never touch that backend. The gateway's audit trail records every request: which partner, which profile, which branches executed, which backends were called, and latency per node. When the equity positions service changed its response schema, the team updated the set_node expression in the workflow builder, saved, validated, and enabled. No application deployment required. Every change to the workflow is versioned in Zerq's config audit log alongside access control and credential changes, so compliance teams see the full history of who modified what and when.

Closing summary

The Zerq workflow builder gives platform teams a way to handle API orchestration, transformation, validation, and routing directly at the gateway layer, without writing and maintaining custom middleware. The 29 node types cover the patterns that come up repeatedly: fan-out and merge, conditional branching, request validation, response composition, database enrichment, and error handling. The expression system is standard JavaScript with access to every upstream node's output. The built-in test modes let you verify every branch before enabling. And because workflows run inside the gateway process, they inherit the full access control, rate limiting, and audit trail that applies to every other request, including per-node execution records for compliance teams.


Zerq is an enterprise API gateway built for regulated industries — one platform for API management, AI agent access, compliance audit, and developer portal, running entirely in your own infrastructure. See how it works or request a demo to walk through your specific requirements.