Deploy on OpenShift
This guide walks you through deploying API7 Gateway on a Red Hat OpenShift cluster. The instructions cover OpenShift-specific considerations such as Security Context Constraints (SCCs) and service account configuration, while following the same Helm-based deployment model as a standard Kubernetes deployment.
Architecture Overview
API7 Gateway uses the same two-component architecture on OpenShift as on any Kubernetes cluster:
- Control Plane (CP): Dashboard, DP Manager, and PostgreSQL database
- Data Plane (DP): API7 Gateway instances that handle API traffic
The key difference on OpenShift is that pods must comply with Security Context Constraints (SCCs), which restrict the actions a pod can perform.
Prerequisites
Before you begin, ensure you have:
- An OpenShift cluster running version `4.12` or later, with `cluster-admin` access.
- The OpenShift CLI (`oc`) version `4.12` or later. See Install oc.
- Helm version `3.10` or later. See Install Helm.
- An API7 Enterprise license. See Get a trial license.
Log in to the OpenShift Cluster
Log in using the oc CLI. You can find the login command from the OpenShift web console under your user menu:
oc login \
--token=sha256~YOUR_TOKEN \
--server=https://api.YOUR_CLUSTER.openshiftapps.com:6443
Verify that you have cluster-admin access:
oc auth can-i create scc --all-namespaces
The command should print `yes`.
Step 1: Create a Project
Create a dedicated OpenShift project (namespace) for API7:
oc new-project api7
Alternatively, create the project from the OpenShift web console.
Step 2: Install the Control Plane
Add the API7 Helm Repository
helm repo add api7 https://charts.api7.ai
helm repo update
Configure Security Context for Built-in Components
The built-in PostgreSQL and Prometheus pods require writable filesystems. On OpenShift, disable the chart's default securityContext settings so that OpenShift's SCC system manages the security contexts instead. Save the following to a file named `cp-values.yaml`:
postgresql:
primary:
podSecurityContext:
enabled: false
containerSecurityContext:
enabled: false
prometheus:
server:
podSecurityContext:
enabled: false
containerSecurityContext:
enabled: false
For production deployments, use an external PostgreSQL database (e.g., Amazon RDS, Azure Database for PostgreSQL, or a dedicated PostgreSQL instance) instead of the built-in one. See the Kubernetes deployment guide for external database configuration.
Install the Control Plane Helm Chart
helm install api7ee3 api7/api7ee3 \
-f cp-values.yaml \
-n api7
Verify Control Plane Services
oc get svc -n api7 -l app.kubernetes.io/name=api7ee3 -o wide
Expected output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
api7ee3-dashboard ClusterIP 172.30.39.137 <none> 7080/TCP,7443/TCP 2m
api7ee3-developer-portal ClusterIP 172.30.114.132 <none> 4321/TCP 2m
api7ee3-dp-manager ClusterIP 172.30.232.75 <none> 7900/TCP,7943/TCP 2m
Step 3: Activate the License
Port-forward the Dashboard service:
oc port-forward svc/api7ee3-dashboard -n api7 7443:7443
Open https://localhost:7443 in your browser. Log in with the default credentials (admin / admin), then upload your license.
Step 4: Configure the Control Plane Address
In the API7 Dashboard, navigate to Gateway Settings and set the Control Plane Address to:
https://api7ee3-dp-manager.api7.svc.cluster.local:7943
This is the address data plane instances will use to connect to the control plane via mTLS.
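The control plane address follows the standard Kubernetes service DNS pattern (`https://{service}.{namespace}.svc.cluster.local:{port}`). As a sanity check, you can assemble it from the values used in this guide; adjust the variables if you installed into a different project:

```shell
# Assemble the control plane address from the service name and namespace.
# These values match the defaults used throughout this guide.
SERVICE="api7ee3-dp-manager"
NAMESPACE="api7"
MTLS_PORT=7943   # DP Manager mTLS port

CP_ADDRESS="https://${SERVICE}.${NAMESPACE}.svc.cluster.local:${MTLS_PORT}"
echo "$CP_ADDRESS"
# https://api7ee3-dp-manager.api7.svc.cluster.local:7943
```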
Step 5: Install the Data Plane
Configure SCC for API7 Gateway
API7 Gateway needs to write local files at runtime (e.g., nginx.conf, logs, cache files). The nonroot-v2 SCC provides the required permissions.
Create a service account for the gateway:
oc create serviceaccount api7-gateway -n api7
Create a role that grants access to the nonroot-v2 SCC:
oc create role api7-gateway-scc \
--verb=use \
--resource=scc \
--resource-name=nonroot-v2 \
-n api7
Bind the role to the service account:
oc create rolebinding api7-gateway-scc \
--role=api7-gateway-scc \
--serviceaccount=api7:api7-gateway \
-n api7
Generate the Deployment Script
In the API7 Dashboard, select a gateway group (e.g., default), navigate to Gateway Instances, and click Add Gateway Instance. Switch to the Kubernetes tab and click Generate to produce the deployment script with mTLS certificates.
The generated script includes:
- TLS certificates for CP-DP mutual authentication
- A Helm install command with all necessary parameters
Add OpenShift-Specific Helm Values
Append the following flags to the generated Helm command to ensure OpenShift compatibility:
--set "serviceAccount.name=api7-gateway" \
--set "apisix.securityContext.runAsNonRoot=true" \
--set "apisix.securityContext.runAsUser=636"
The complete Helm command should look similar to:
helm upgrade --install api7-ee-3-gateway api7/gateway \
--set "etcd.auth.tls.enabled=true" \
--set "etcd.auth.tls.existingSecret=api7-ee-3-gateway-tls" \
--set "etcd.auth.tls.certFilename=tls.crt" \
--set "etcd.auth.tls.certKeyFilename=tls.key" \
--set "etcd.auth.tls.verify=true" \
--set "gateway.tls.existingCASecret=api7-ee-3-gateway-tls" \
--set "gateway.tls.certCAFilename=ca.crt" \
--set "apisix.extraEnvVars[0].name=API7_GATEWAY_GROUP_SHORT_ID" \
--set "apisix.extraEnvVars[0].value=default" \
--set "etcd.host[0]=https://api7ee3-dp-manager.api7.svc.cluster.local:7943" \
--set "apisix.replicaCount=1" \
--set "apisix.image.repository=api7/api7-ee-3-gateway" \
--set "apisix.image.tag=3.2.14.4" \
--set "serviceAccount.name=api7-gateway" \
--set "apisix.securityContext.runAsNonRoot=true" \
--set "apisix.securityContext.runAsUser=636" \
-n api7
Key OpenShift-specific parameters:
| Parameter | Value | Purpose |
|---|---|---|
| `serviceAccount.name` | `api7-gateway` | Uses the service account with `nonroot-v2` SCC access |
| `apisix.securityContext.runAsNonRoot` | `true` | Ensures the pod runs as a non-root user |
| `apisix.securityContext.runAsUser` | `636` | Sets the UID for the gateway process to match the SCC requirements |
Step 6: Verify the Installation
Check Gateway Pods
oc get pods -n api7 -l app.kubernetes.io/name=gateway
Expected output:
NAME READY STATUS RESTARTS AGE
api7-ee-3-gateway-xxxxx-yyyyy 1/1 Running 0 1m
Check Gateway Instances in the Dashboard
Navigate to the gateway group in the API7 Dashboard. The gateway instances should appear as Healthy.
Send a Test Request
Port-forward the gateway service and send a test request:
oc port-forward svc/api7-ee-3-gateway-gateway -n api7 9080:80
curl -i "http://127.0.0.1:9080/"
If no routes are configured, you should receive a 404 response from API7 Gateway, confirming it is running and accepting traffic.
Expose the Gateway
Using OpenShift Routes
Create an OpenShift Route to expose the gateway externally:
oc expose svc/api7-ee-3-gateway-gateway -n api7 --port=80
For HTTPS with edge termination:
oc create route edge api7-gateway \
--service=api7-ee-3-gateway-gateway \
--port=80 \
-n api7
Get the route URL:
oc get route api7-gateway -n api7 -o jsonpath='{.spec.host}'
Using a LoadBalancer Service
If your OpenShift cluster supports cloud load balancers:
oc patch svc api7-ee-3-gateway-gateway -n api7 \
-p '{"spec": {"type": "LoadBalancer"}}'
Troubleshooting
Pod Fails with SCC Violation
Symptom: The pod fails to start with an event such as `unable to validate against any security context constraint` or a `forbidden` error.
Resolution: Verify the service account has the correct SCC binding:
oc get rolebinding api7-gateway-scc -n api7 -o yaml
Verify the SCC is applied to the pod:
oc describe pod <gateway-pod-name> -n api7 | grep -i scc
PostgreSQL or Prometheus Pods Fail to Start
Symptom: Built-in PostgreSQL or Prometheus pods fail with permission errors.
Resolution: Ensure podSecurityContext.enabled and containerSecurityContext.enabled are set to false in the Helm values as shown in Step 2.
Data Plane Cannot Connect to Control Plane
See the Kubernetes deployment troubleshooting guide for mTLS connectivity issues.
FAQ
Which SCC Should I Use?
The nonroot-v2 SCC is sufficient for API7 Gateway. It allows the pod to run as a non-root user (UID 636) while permitting filesystem writes needed for NGINX configuration and logs. You do not need the privileged or anyuid SCC.
Can I Deploy the Control Plane and Data Plane in Different Projects?
Yes. The CP-DP communication uses Kubernetes service DNS (https://{service-name}.{namespace}.svc.cluster.local:7943), which works across namespaces. Ensure the network policies in your OpenShift cluster allow traffic between the projects.
How Do I Connect to an External PostgreSQL?
Configure the database DSN in the Helm values file:
postgresql:
enabled: false
externalPostgresql:
host: "YOUR_PG_HOST"
port: 5432
user: "api7"
password: "YOUR_DB_PASSWORD"
database: "api7ee"
Next Steps
- Deploy for High Availability: Configure multi-replica deployments.
- Configure the Control Plane: Fine-tune control plane settings.
- Scale the Data Plane: Add more gateway instances.