# Configure Readiness and Liveness Probes
API7 Gateway data plane (DP) nodes can expose built-in health endpoints on the status port, 7085 by default. You can use these endpoints in Kubernetes probes, load balancer health checks, or container-level checks to ensure traffic is sent only to healthy nodes.
Use this guide when you need to:
- configure Kubernetes readiness and liveness probes
- add health checks in non-Helm Kubernetes manifests
- configure Docker or external load balancer checks for non-Kubernetes deployments
## Health Check Endpoints
When the data plane status block is enabled, API7 Gateway provides health check endpoints on the status port (default 7085).
| Type | Path | Purpose |
|---|---|---|
| Liveness | /status | Indicates if the process has successfully started and is running. |
| Readiness | /status/ready | Indicates if the gateway is ready to proxy traffic (e.g., DP Manager connection is available). |
By default, /status returns 200 when the gateway process is up, while /status/ready can return 503 if the data plane cannot reach the configured control plane endpoint. This makes /status/ready the better choice for deciding whether a node should receive traffic.
The status endpoints are controlled by the data plane status configuration:

```yaml
apisix:
  status:
    ip: 0.0.0.0
    port: 7085
```
If you bind the status endpoint to 127.0.0.1, local container checks still work, but external load balancers will not be able to reach it.
## Configure Probes in Kubernetes

### Helm values
If you are deploying API7 Gateway via Helm, you can configure these probes in your values.yaml file.
```yaml
apisix:
  livenessProbe:
    httpGet:
      path: /status
      port: 7085
    initialDelaySeconds: 10
    periodSeconds: 10
    timeoutSeconds: 5
    failureThreshold: 3
  readinessProbe:
    httpGet:
      path: /status/ready
      port: 7085
    initialDelaySeconds: 5
    periodSeconds: 10
    timeoutSeconds: 5
    failureThreshold: 3
```
Use /status for liveness and /status/ready for readiness unless you have a specific reason to use the same endpoint for both.
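After updating `values.yaml`, roll the change out with a Helm upgrade. The release name, chart reference, and namespace below are placeholders; substitute the ones from your own installation:

```shell
# "api7-gateway" (release), "api7/gateway" (chart), and "api7" (namespace)
# are placeholders; use the names from your own installation.
helm upgrade api7-gateway api7/gateway \
  --namespace api7 \
  -f values.yaml
```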
### Non-Helm Kubernetes manifests
If you deploy API7 Gateway with raw Deployment manifests instead of Helm, configure the probes directly on the gateway container:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api7-gateway
spec:
  template:
    spec:
      containers:
        - name: gateway
          image: api7/api7-ee-3-gateway:latest
          ports:
            - containerPort: 9080
            - containerPort: 9443
            - containerPort: 7085
          livenessProbe:
            httpGet:
              path: /status
              port: 7085
            initialDelaySeconds: 10
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /status/ready
              port: 7085
            initialDelaySeconds: 5
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 3
```
If you expose the gateway through a Kubernetes Service, do not send client traffic to port 7085; keep it for health checks and operational access only.
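After applying the manifest, you can confirm the probes are wired up correctly. The label selector below is a placeholder for whatever labels your Deployment uses:

```shell
# READY stays 0/1 until /status/ready returns 200.
kubectl get pods -l app=api7-gateway

# Shows the configured liveness/readiness probes and any probe-failure events.
kubectl describe pod <pod-name>
```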
## Configure Checks Outside Kubernetes

### Docker or container runtime checks
If you run API7 Gateway directly with Docker, you can verify health from the host or from inside the gateway container by querying the status endpoints:
```shell
curl -i http://127.0.0.1:7085/status
curl -i http://127.0.0.1:7085/status/ready
```
If port 7085 is not published to the host, expose it first so the health endpoints are reachable from outside the container.
For Docker Compose or other container runtimes, you can add a health check similar to the following:
```yaml
services:
  gateway:
    image: api7/api7-ee-3-gateway:${GATEWAY_VERSION}
    volumes:
      - ./gateway_conf/config.yaml:/usr/local/apisix/conf/config.yaml:ro
    ports:
      - "9080:9080"
      - "9443:9443"
      - "7085:7085"
    healthcheck:
      test: ["CMD-SHELL", "wget -q -O- http://127.0.0.1:7085/status/ready >/dev/null 2>&1 || exit 1"]
      interval: 10s
      timeout: 5s
      retries: 3
      start_period: 10s
```
This pattern is useful for local development, Docker Compose, or VM-based deployments where an orchestrator needs a simple container health signal.
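Once the healthcheck is defined, the runtime tracks a health state you can query. The container name below assumes the Compose service shown above; adjust it to your actual container name:

```shell
# Prints starting, healthy, or unhealthy.
# "gateway" is the service name from the Compose file; your container
# name may include a project prefix.
docker inspect --format '{{.State.Health.Status}}' gateway

# Compose also surfaces health in the STATUS column.
docker compose ps
```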
### External load balancer checks
For VM, bare-metal, or Docker deployments behind a load balancer, configure health checks against the DP status API:
- use `http://<dp-host>:7085/status` when you only need to verify that the process is running
- use `http://<dp-host>:7085/status/ready` when you want to stop routing traffic to nodes that have lost control plane connectivity
The readiness endpoint is usually the better choice for traffic-serving load balancers because it reflects whether the node can still proxy with the expected configuration state.
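As an illustration, an HAProxy backend that health-checks the status port while sending client traffic to the proxy port might look like the following sketch (the backend name and server addresses are placeholders):

```
backend api7_gateway_dp
    option httpchk GET /status/ready
    http-check expect status 200
    # Client traffic goes to the proxy port; health checks go to the status port.
    server dp1 10.0.0.11:9080 check port 7085
    server dp2 10.0.0.12:9080 check port 7085
```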
## TCP Socket Checks
As an alternative to HTTP checks, you can use TCP socket checks on the proxy port (default 9080 for HTTP or 9443 for HTTPS). This is useful if you want to ensure the network stack is responding even if the status endpoint is unavailable.
```yaml
readinessProbe:
  tcpSocket:
    port: 9080
  initialDelaySeconds: 5
  periodSeconds: 10
```
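You can run the equivalent check by hand with a TCP client such as `nc`:

```shell
# Exit code 0 means the proxy port accepted a TCP connection within 3 seconds.
nc -z -w 3 127.0.0.1 9080 && echo "port 9080 open" || echo "port 9080 closed"
```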
## Expected Responses

- Success: a `200 OK` response indicates the instance is healthy.
- Readiness failure: a `503 Service Unavailable` response from `/status/ready` indicates that none of the configured DP Manager (`etcd.host`) endpoints are available.
- Failure: any other status code, connection refusal, or timeout indicates a health check failure.
## Recommended Defaults

- Use `/status` for liveness checks.
- Use `/status/ready` for readiness checks and traffic-serving load balancers.
- Keep the status port reachable only by your orchestrator, load balancer, or operations network.
- If you run outside Kubernetes, make sure your `config.yaml` exposes the status port on an address your health checker can reach.
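Outside Kubernetes, deploy scripts often need to wait for readiness before shifting traffic onto a node. The following is a minimal sketch of such a polling loop; the `wait_ready` helper and its defaults are illustrative, not part of API7 Gateway:

```shell
# Poll a readiness URL until it returns HTTP 200, or give up.
# Usage: wait_ready <url> [attempts] [delay-seconds]
wait_ready() {
  url="$1"; attempts="${2:-30}"; delay="${3:-2}"; i=1
  while [ "$i" -le "$attempts" ]; do
    # curl prints only the HTTP status code; connection failures become 000.
    code="$(curl -s -o /dev/null -w '%{http_code}' "$url" 2>/dev/null)" || code=000
    if [ "$code" = "200" ]; then
      echo "ready after $i attempt(s)"
      return 0
    fi
    i=$((i + 1))
    sleep "$delay"
  done
  echo "not ready after $attempts attempts" >&2
  return 1
}

# Example: wait_ready "http://127.0.0.1:7085/status/ready" 30 2
```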
## Next Steps
- Data Plane High Availability to design load balancer failover.
- Scale Data Plane to add more traffic-serving nodes.
- Monitor Gateway Metrics to track health trends over time.
- Configure Centralized Logging to debug probe failures.