Deploy on Kubernetes

This guide walks you through deploying API7 Enterprise on Kubernetes using Helm charts. The deployment consists of two main components:

  • Control Plane (CP): Dashboard, DP Manager, and PostgreSQL database
  • Data Plane (DP): API7 Gateway instances that handle API traffic

Prerequisites

Before you begin, ensure you have:

  • A Kubernetes cluster running version 1.25 or later, such as EKS, GKE, AKS, or a self-managed cluster.
  • kubectl version 1.25 or later, configured to access your cluster.
  • Helm version 3.10 or later. See Install Helm.
  • PostgreSQL version 12 or later. An external database is recommended for production.
  • An API7 Enterprise license. See Get a trial license.

Storage Classes

By default, PostgreSQL and Prometheus enable persistent storage. If your cluster does not have a StorageClass configured, you will see PVC binding errors. Either configure a StorageClass or disable persistence (not recommended for production).
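A quick way to check for a usable StorageClass, plus a sketch of installing with persistence disabled for evaluation-only clusters. The exact value paths should be confirmed against `helm show values api7/api7ee3`; the `prometheus` path in particular is an assumption:

```shell
# Check whether the cluster has a default StorageClass (marked "(default)")
kubectl get storageclass

# Evaluation only: install with persistence disabled if no StorageClass exists.
# Value paths are assumptions -- verify with `helm show values api7/api7ee3`.
helm install api7ee3 api7/api7ee3 -n api7 \
  --set postgresql.primary.persistence.enabled=false \
  --set prometheus.server.persistentVolume.enabled=false
```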

Step 1: Add the API7 Helm Repository

helm repo add api7 https://charts.api7.ai
helm repo update

Verify the repository was added:

helm search repo api7/

Expected output includes api7/api7ee3 (Control Plane) and api7/gateway (Data Plane).

Step 2: Create a Namespace

Create a dedicated namespace for all API7 components:

kubectl create namespace api7

Step 3: Provision a PostgreSQL Database

For production deployments, use an external PostgreSQL database (e.g., Amazon RDS, Cloud SQL, Azure Database for PostgreSQL) for durability and high availability.

If you are evaluating API7 or running in a non-production environment, the Helm chart can deploy a built-in PostgreSQL instance. In that case, skip this step and set postgresql.builtin: true in your CP values file (Step 4).

For external PostgreSQL, create the database and user:

CREATE DATABASE api7ee;
CREATE USER api7ee WITH ENCRYPTED PASSWORD 'YOUR_DB_PASSWORD';
GRANT ALL PRIVILEGES ON DATABASE api7ee TO api7ee;
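One way to apply these statements, assuming you saved them to a local file; the file name and the `postgres` admin user are placeholders for your environment:

```shell
# Run the DDL above as a privileged user, then verify the new role can connect:
psql -h YOUR_PG_HOST -U postgres -f create-api7-db.sql
psql -h YOUR_PG_HOST -U api7ee -d api7ee -c 'SELECT 1;'
```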

Step 4: Configure the Control Plane

Create a cp-values.yaml file:

cp-values.yaml
# -- Disable the built-in PostgreSQL chart and point the dashboard to your external database.
postgresql:
  builtin: false

dashboard_configuration:
  database:
    dsn: "postgres://api7ee:YOUR_DB_PASSWORD@YOUR_PG_HOST:5432/api7ee"

dp_manager_configuration:
  database:
    dsn: "postgres://api7ee:YOUR_DB_PASSWORD@YOUR_PG_HOST:5432/api7ee"

dashboard_service:
  type: ClusterIP
  1. Disable the built-in PostgreSQL chart when you use an external database.
  2. Replace YOUR_PG_HOST and YOUR_DB_PASSWORD with your PostgreSQL connection details for both the Dashboard and DP Manager. For AWS RDS, use the instance endpoint, for example api7ee.xxxx.us-east-1.rds.amazonaws.com.
  3. For initial setup, ClusterIP is recommended. Use kubectl port-forward to access the dashboard. Switch to LoadBalancer or use an Ingress controller for permanent access.

Quick Evaluation Setup

For a quick evaluation with the built-in PostgreSQL (not for production):

cp-values-eval.yaml
postgresql:
  builtin: true
  primary:
    persistence:
      enabled: true
      size: 10Gi

dashboard_service:
  type: ClusterIP
  1. Enable the built-in PostgreSQL chart for evaluation environments.
  2. Keep the dashboard service as ClusterIP and access it with kubectl port-forward.

Step 5: Install the Control Plane

helm install api7ee3 api7/api7ee3 \
  -f cp-values.yaml \
  -n api7

Both the Helm release name and the chart name used here are api7ee3, matching the chart's own Chart.yaml. You can pick any release name you prefer, but if you change it, adjust the Helm selectors in the verification commands below accordingly.

Step 6: Verify Control Plane Pods

Wait for all CP pods to reach the Running state:

kubectl get pods -n api7 -w

Expected output:

NAME                             READY   STATUS    RESTARTS   AGE
api7ee3-dashboard-xxxxx-yyyyy    1/1     Running   0          2m
api7ee3-dp-manager-xxxxx-yyyyy   1/1     Running   0          2m

If using the built-in PostgreSQL, you should also see:

api7-postgresql-0                       1/1     Running   0          2m

Step 7: Verify Control Plane Services

kubectl get svc -n api7 -l app.kubernetes.io/name=api7ee3 -o wide

Expected output:

NAME                       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
api7ee3-dashboard          ClusterIP   10.100.25.236   <none>        7080/TCP,7443/TCP   3m
api7ee3-developer-portal   ClusterIP   10.100.88.101   <none>        4321/TCP            3m
api7ee3-dp-manager         ClusterIP   10.100.239.32   <none>        7900/TCP,7943/TCP   3m

Key services:

Service                    Port   Purpose
api7ee3-dashboard          7443   Dashboard UI and Admin API
api7ee3-dashboard          7080   Dashboard HTTP endpoint
api7ee3-developer-portal   4321   Developer portal service
api7ee3-dp-manager         7900   DP Manager HTTP endpoint
api7ee3-dp-manager         7943   DP Manager mTLS endpoint (gateway connection)

Step 8: Access the Dashboard

Forward the dashboard port to your local machine:

kubectl -n api7 port-forward svc/api7ee3-dashboard 7443:7443

Open https://localhost:7443 in your browser. Log in with the default credentials (admin / admin). On first login, the dashboard prompts you to reset the password. After that, the dashboard opens the Activate License screen, where you can upload and activate your license.

Step 9: Configure the DP Manager Address

In the API7 Dashboard, navigate to Gateway Settings and set DP Manager Address to the internal Kubernetes service DNS:

https://api7ee3-dp-manager:7943

This is the address that data plane instances will use to connect to the control plane.
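Before moving on, you can sanity-check that this address resolves and the mTLS port is reachable from inside the cluster; the throwaway busybox pod below is removed automatically when the command exits:

```shell
# nc -zv: probe the port without sending data, verbosely
kubectl run dp-manager-check --rm -it --restart=Never \
  --image=busybox:1.36 -n api7 \
  -- nc -zv api7ee3-dp-manager 7943
```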

Step 10: Add a Gateway Instance (Generate DP Certificates)

In the API7 Dashboard, select a gateway group such as default, navigate to Gateway Instances, and click Add Gateway Instance. Choose Kubernetes, fill in the target namespace, and generate the deployment script. The dashboard will generate:

  1. A TLS certificate (tls.crt) for the data plane
  2. A TLS private key (tls.key)
  3. A CA certificate (ca.crt) for verifying the control plane
  4. A kubectl create secret command for the generated certificates
  5. A complete Helm install command with all necessary parameters

Copy the generated script. It contains the certificates, the secret creation command, and the exact Helm command to install the data plane.

Step 11: Create the mTLS Secret

Extract the certificates from the generated script and create a Kubernetes secret:

# Save the certificates from the generated script to files,
# then create the secret:
kubectl create secret generic api7-ee-3-gateway-tls \
  --from-file=tls.crt=/tmp/tls.crt \
  --from-file=tls.key=/tmp/tls.key \
  --from-file=ca.crt=/tmp/ca.crt \
  -n api7

Caution

The certificates are unique per gateway group. If you have multiple gateway groups, each group needs its own set of certificates and a separate Kubernetes secret.

Step 12: Install the Data Plane

Use the Helm command generated by the dashboard, or put the dashboard-generated settings into a dp-values.yaml file and install the data plane with Helm.

dp-values.yaml
etcd:
  auth:
    tls:
      enabled: true
      existingSecret: api7-ee-3-gateway-tls
      certFilename: tls.crt
      certKeyFilename: tls.key
      verify: true
  host:
    - https://api7ee3-dp-manager:7943

gateway:
  tls:
    existingCASecret: api7-ee-3-gateway-tls
    certCAFilename: ca.crt

apisix:
  extraEnvVars:
    - name: API7_GATEWAY_GROUP_SHORT_ID
      value: default
  replicaCount: 2
  image:
    repository: api7/api7-ee-3-gateway
    tag: ${GATEWAY_VERSION}

❶ Point the data plane to the DP Manager mTLS service inside the cluster.

❷ Replace default with your gateway group's short ID if you are not using the default group.

❸ Set the replica count and ${GATEWAY_VERSION} based on your availability requirements and the control-plane-compatible gateway image version. Always pin an explicit version in production rather than using latest.
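To choose a version to pin, you can list what the repository publishes; the matching gateway image tag should still come from the dashboard-generated command or the compatibility notes for your control plane version:

```shell
# List all published versions of the data plane chart
helm search repo api7/gateway --versions
```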

helm upgrade --install api7-ee-3-gateway api7/gateway \
  -f dp-values.yaml \
  -n api7

Step 13: Verify Data Plane Pods

kubectl get pods -n api7 -l app.kubernetes.io/name=gateway -w

Expected output:

NAME                            READY   STATUS    RESTARTS   AGE
api7-ee-3-gateway-xxxxx-yyyyy   1/1     Running   0          1m
api7-ee-3-gateway-xxxxx-zzzzz   1/1     Running   0          1m

Step 14: Verify Data Plane Services

kubectl get svc -n api7 -l app.kubernetes.io/name=gateway

Expected output:

NAME                        TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
api7-ee-3-gateway-gateway   NodePort   10.100.50.100   <none>        80:31080/TCP,443:31443/TCP   1m

Step 15: Verify Connection in Dashboard

Log in to the API7 Dashboard and navigate to Gateway Groups. The data plane instances should appear as Healthy.

If the instances do not appear:

  1. Check the DP pod logs: kubectl logs -n api7 -l app.kubernetes.io/name=gateway --tail=50
  2. Verify the mTLS secret exists: kubectl get secret api7-ee-3-gateway-tls -n api7
  3. Confirm the CP address is reachable from the DP pods: kubectl exec -n api7 <gateway-pod> -- curl -k "https://api7ee3-dp-manager:7943/apisix/admin/routes"

Step 16: Expose the Gateway

Choose one of the following methods to expose the gateway for external traffic:

Option A: NodePort Service (Default)

The gateway Helm chart creates a NodePort service by default. You can reach it through a node IP, or port-forward the service for local verification.

# Port-forward the default service for local access
kubectl port-forward -n api7 svc/api7-ee-3-gateway-gateway 9080:80

Option B: LoadBalancer Service

On cloud providers, switch the service to LoadBalancer to provision an external address:

helm upgrade api7-ee-3-gateway api7/gateway \
  --set "gateway.type=LoadBalancer" \
  -n api7 --reuse-values

After the service is updated, check the external address:

kubectl get svc -n api7 api7-ee-3-gateway-gateway \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

Option C: Ingress Controller

If you already have an Ingress controller (e.g., NGINX Ingress, AWS ALB Controller), create an Ingress resource pointing to the gateway service.
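As a sketch, an NGINX Ingress routing a hypothetical host to the gateway service might look like this; the host name and ingress class are placeholders for your environment:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api7-gateway
  namespace: api7
spec:
  ingressClassName: nginx        # Match your installed controller
  rules:
    - host: api.example.com      # Placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api7-ee-3-gateway-gateway
                port:
                  number: 80
```

Apply it with kubectl apply -f ingress.yaml and point DNS for the host at the controller's external address.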

Step 17: Test the Deployment

Send a test request to the gateway:

curl -i "http://127.0.0.1:9080/"

If no routes are configured, you should receive a 404 response from API7 Gateway, confirming the gateway is running and accepting traffic.


Cloud-Specific Guidance

AWS EKS

Load Balancer Configuration

Use the AWS Load Balancer Controller for Network Load Balancer (NLB) support:

Additional values for EKS
apisix:
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-type: "external"
      service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
      service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"

Node Group Recommendations

For performance-sensitive deployments, use dedicated node groups:

Node Group   Instance Type                Purpose
api7-cp      m5.xlarge (4 vCPU, 16 GB)    Control plane components
api7-dp      c5.2xlarge (8 vCPU, 16 GB)   Gateway data plane

Use node selectors or node affinity to schedule components on the appropriate nodes:

DP values with node selector
apisix:
  nodeSelector:
    nodeName: api7-dp

EKS-Specific Prerequisites

  1. Install the AWS Load Balancer Controller if using NLB.
  2. Create an EBS CSI driver add-on for PostgreSQL persistent storage.
  3. Configure kubectl to access your EKS cluster:
aws eks update-kubeconfig --region <region-code> --name <cluster-name>
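For item 2, a sketch of enabling the EBS CSI add-on via the AWS CLI; cluster name and region are placeholders, and depending on your IAM setup you may also need to pass a role with --service-account-role-arn:

```shell
# Associate the EBS CSI driver add-on with the cluster
aws eks create-addon \
  --cluster-name <cluster-name> \
  --addon-name aws-ebs-csi-driver \
  --region <region-code>
```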

GCP GKE

Load Balancer Configuration

GKE automatically provisions a passthrough TCP load balancer for LoadBalancer services. For a backend-service-based load balancer, which is required for Google Cloud Armor policies, add the following annotations:

Additional values for GKE
apisix:
  service:
    annotations:
      cloud.google.com/l4-rbs: "enabled"
      networking.gke.io/load-balancer-type: "Internal"   # For internal-only access

Firewall Rules

Ensure the following firewall rules allow traffic to your GKE nodes:

Port   Protocol   Source             Purpose
9080   TCP        Client CIDR        Gateway HTTP traffic
9443   TCP        Client CIDR       Gateway HTTPS traffic
7443   TCP        Admin CIDR         Dashboard access
7943   TCP        Cluster internal   CP ↔ DP mTLS communication

GKE-Specific Prerequisites

  1. Enable GKE Workload Identity if using Cloud SQL for PostgreSQL.
  2. Create a Cloud SQL PostgreSQL instance and configure a private IP for your VPC.
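A sketch of item 2 using the gcloud CLI; the instance name, region, tier, and network are placeholders to adjust for your project:

```shell
# Create a private-IP PostgreSQL 15 instance on your VPC
gcloud sql instances create api7-pg \
  --database-version=POSTGRES_15 \
  --region=us-central1 \
  --tier=db-custom-2-8192 \
  --network=projects/<project-id>/global/networks/<vpc-name> \
  --no-assign-ip

# Create the database and user expected by the Control Plane
gcloud sql databases create api7ee --instance=api7-pg
gcloud sql users create api7ee --instance=api7-pg --password=YOUR_DB_PASSWORD
```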

Azure AKS

Load Balancer Configuration

AKS uses the Standard SKU load balancer by default. For internal access:

Additional values for AKS
apisix:
  service:
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"

AKS-Specific Prerequisites

  1. Use the Azure Disk CSI driver (enabled by default on AKS 1.21+) for PostgreSQL persistent storage.
  2. For external PostgreSQL, provision an Azure Database for PostgreSQL Flexible Server.
  3. Use Azure CNI for predictable pod IP allocation in production.
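A sketch of item 2 with the Azure CLI; the resource group, server name, and SKU are placeholders, and current SKU names can be confirmed with az postgres flexible-server list-skus:

```shell
az postgres flexible-server create \
  --resource-group <resource-group> \
  --name api7-pg \
  --version 15 \
  --tier GeneralPurpose \
  --sku-name Standard_D2ds_v4 \
  --admin-user api7admin \
  --admin-password YOUR_DB_PASSWORD
```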

Production Hardening

Resource Requests and Limits

Always set resource requests and limits for production workloads:

DP values with resource limits
apisix:
  resources:
    requests:
      cpu: "2"
      memory: "4Gi"
    limits:
      cpu: "4"
      memory: "8Gi"

Worker Processes

Configure the number of NGINX worker processes to match the number of CPU cores allocated to the gateway pods:

--set "nginx.workerProcesses=auto"

Or set a specific number:

--set "nginx.workerProcesses=4"

Pod Disruption Budgets

Create a PodDisruptionBudget (PDB) to ensure gateway availability during node maintenance:

pdb.yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: api7-gateway-pdb
  namespace: api7
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: gateway

kubectl apply -f pdb.yaml

Horizontal Pod Autoscaler

Enable auto-scaling based on CPU utilization:

hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api7-gateway-hpa
  namespace: api7
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api7-ee-3-gateway
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70

kubectl apply -f hpa.yaml

Anti-Affinity Rules

Spread gateway pods across nodes for fault tolerance:

DP values with anti-affinity
apisix:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchExpressions:
                - key: app.kubernetes.io/name
                  operator: In
                  values:
                    - gateway
            topologyKey: kubernetes.io/hostname

Troubleshooting

Pods Stuck in Pending State

Symptom: CP or DP pods remain in Pending state.

Cause: Insufficient cluster resources or missing StorageClass for PVCs.

Resolution:

# Check pod events
kubectl describe pod <pod-name> -n api7

# If PVC-related, check storage classes
kubectl get storageclass

Data Plane Cannot Connect to Control Plane

Symptom: DP pods are Running but show connection refused or certificate verify failed in logs.

Resolution:

  1. Verify the mTLS secret exists and contains all three files:
     kubectl get secret api7-ee-3-gateway-tls -n api7 -o jsonpath='{.data}' | jq 'keys'
     # Should show: ["ca.crt", "tls.crt", "tls.key"]
  2. Verify the CP address is resolvable from the DP pod:
     kubectl exec -n api7 <gateway-pod> -- nslookup api7ee3-dp-manager
  3. Check the DP pod logs for specific errors:
     kubectl logs -n api7 <gateway-pod> --tail=100

Dashboard Not Accessible

Symptom: Cannot reach the dashboard after port-forward or LoadBalancer setup.

Resolution:

  1. Verify the dashboard pod is running: kubectl get pods -n api7 -l app.kubernetes.io/component=dashboard
  2. Check the service endpoint: kubectl get endpoints -n api7 api7ee3-dashboard
  3. Ensure port 7443 is not blocked by network policies or security groups.
