OpenTelemetry
The opentelemetry plugin instruments APISIX and sends traces to an OpenTelemetry collector according to the OpenTelemetry specification, using binary-encoded OTLP over HTTP.
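Concretely, for a collector address such as 127.0.0.1:4318, the plugin exports spans by POSTing protobuf-encoded payloads to the collector's OTLP/HTTP traces endpoint. The helper below is an illustrative sketch of how that URL is derived (the /v1/traces path is the default signal path defined by the OTLP/HTTP specification; the function itself is not plugin code):

```python
def otlp_traces_endpoint(address: str, scheme: str = "http") -> str:
    """Build the OTLP/HTTP traces URL from a collector address.

    /v1/traces is the default traces signal path in the OTLP/HTTP spec.
    """
    return f"{scheme}://{address}/v1/traces"


# For the collector used throughout this page:
print(otlp_traces_endpoint("127.0.0.1:4318"))
```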
Examples
The examples below demonstrate how you can work with the opentelemetry plugin for different scenarios.
Enable opentelemetry Plugin
If you are using API7 Enterprise, you may skip this section as there is no need to manually enable the plugin.
By default, the opentelemetry plugin is disabled in APISIX. To enable it, add the plugin to your configuration file as follows:
plugins:
  - ...
  - opentelemetry
Reload APISIX for changes to take effect.
If you are using the Ingress Controller with Helm, update the plugin list in the Helm values file and run helm upgrade. For more information, see the Helm chart values reference.
Send Traces to OpenTelemetry
The following example demonstrates how to trace requests to a route and send traces to OpenTelemetry.
Start an OpenTelemetry collector instance:
- Docker
- Kubernetes
docker run -d --name otel-collector -p 4318:4318 otel/opentelemetry-collector-contrib
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: aic
  name: otel-collector-config
data:
  config.yaml: |
    receivers:
      otlp:
        protocols:
          http:
            endpoint: 0.0.0.0:4318
    exporters:
      debug:
        verbosity: detailed
    service:
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [debug]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: aic
  name: otel-collector
spec:
  replicas: 1
  selector:
    matchLabels:
      app: otel-collector
  template:
    metadata:
      labels:
        app: otel-collector
    spec:
      containers:
        - name: otel-collector
          image: otel/opentelemetry-collector-contrib
          args:
            - "--config=/conf/config.yaml"
          ports:
            - containerPort: 4318
          volumeMounts:
            - name: config
              mountPath: /conf
      volumes:
        - name: config
          configMap:
            name: otel-collector-config
---
apiVersion: v1
kind: Service
metadata:
  namespace: aic
  name: otel-collector
spec:
  selector:
    app: otel-collector
  ports:
    - name: otlp-http
      port: 4318
      targetPort: 4318
  type: ClusterIP
Apply the manifest:
kubectl apply -f otel-collector.yaml
The collector should now be listening on 127.0.0.1:4318 (Docker) or otel-collector.aic.svc.cluster.local:4318 (Kubernetes). Configure the plugin metadata to point APISIX at the collector address:
- Admin API
- ADC
- Ingress Controller
curl "http://127.0.0.1:9180/apisix/admin/plugin_metadata/opentelemetry" -X PUT \
  -H "X-API-KEY: ${ADMIN_API_KEY}" \
  -d '{
    "collector": {
      "address": "127.0.0.1:4318"
    }
  }'
plugin_metadata:
  - name: opentelemetry
    collector:
      address: "127.0.0.1:4318"
Synchronize the configuration to the gateway:
adc sync -f adc.yaml
Update the pluginMetadata field in your existing GatewayProxy resource:
apiVersion: apisix.apache.org/v1alpha1
kind: GatewayProxy
metadata:
  namespace: aic
  name: apisix-config
spec:
  provider:
    type: ControlPlane
    controlPlane:
      # your control plane connection configuration
      # ....
  pluginMetadata:
    opentelemetry:
      collector:
        address: "otel-collector.aic.svc.cluster.local:4318"
Apply the configuration to your cluster:
kubectl apply -f gateway-proxy.yaml
Create a route with opentelemetry plugin:
- Admin API
- ADC
- Ingress Controller
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
  -H "X-API-KEY: ${ADMIN_API_KEY}" \
  -d '{
    "id": "otel-tracing-route",
    "uri": "/anything",
    "plugins": {
      "opentelemetry": {
        "sampler": {
          "name": "always_on"
        }
      }
    },
    "upstream": {
      "type": "roundrobin",
      "nodes": {
        "httpbin.org": 1
      }
    }
  }'
services:
  - name: httpbin
    routes:
      - uris:
          - /anything
        name: otel-tracing-route
        plugins:
          opentelemetry:
            sampler:
              name: always_on
    upstream:
      type: roundrobin
      nodes:
        - host: httpbin.org
          port: 80
          weight: 1
Synchronize the configuration to the gateway:
adc sync -f adc.yaml
- Gateway API
- APISIX CRD
apiVersion: v1
kind: Service
metadata:
  namespace: aic
  name: httpbin-external-domain
spec:
  type: ExternalName
  externalName: httpbin.org
---
apiVersion: apisix.apache.org/v1alpha1
kind: PluginConfig
metadata:
  namespace: aic
  name: otel-plugin-config
spec:
  plugins:
    - name: opentelemetry
      config:
        sampler:
          name: always_on
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  namespace: aic
  name: otel-route
spec:
  parentRefs:
    - name: apisix
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /anything
      filters:
        - type: ExtensionRef
          extensionRef:
            group: apisix.apache.org
            kind: PluginConfig
            name: otel-plugin-config
      backendRefs:
        - name: httpbin-external-domain
          port: 80
apiVersion: apisix.apache.org/v2
kind: ApisixUpstream
metadata:
  namespace: aic
  name: httpbin-external-domain
spec:
  ingressClassName: apisix
  externalNodes:
    - type: Domain
      name: httpbin.org
---
apiVersion: apisix.apache.org/v2
kind: ApisixRoute
metadata:
  namespace: aic
  name: otel-route
spec:
  ingressClassName: apisix
  http:
    - name: otel-route
      match:
        paths:
          - /anything
      upstreams:
        - name: httpbin-external-domain
      plugins:
        - name: opentelemetry
          enable: true
          config:
            sampler:
              name: always_on
Apply the configuration to your cluster:
kubectl apply -f otel-ic.yaml
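The route above uses the always_on sampler, which records every request; the plugin also supports strategies such as always_off and trace_id_ratio. As a rough illustration of how such head-sampling decisions work (this is a sketch, not the plugin's implementation):

```python
import hashlib


def should_sample(trace_id: str, sampler: str, ratio: float = 1.0) -> bool:
    """Illustrative head-sampling decision.

    always_on records every trace; always_off records none;
    trace_id_ratio records a deterministic fraction keyed on the trace ID,
    so every span of a given trace gets the same decision.
    """
    if sampler == "always_on":
        return True
    if sampler == "always_off":
        return False
    if sampler == "trace_id_ratio":
        # Hash the trace ID into [0, 2^32) so the decision is stable per trace.
        bucket = int(hashlib.sha256(trace_id.encode()).hexdigest()[:8], 16)
        return bucket / 0x100000000 < ratio
    raise ValueError(f"unknown sampler: {sampler}")
```

With always_on, every request to /anything produces a span; a ratio-based sampler would keep only a fraction of traces while still keeping whole traces together.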
Send a request to the route:
curl "http://127.0.0.1:9080/anything"
You should receive an HTTP/1.1 200 OK response.
In OpenTelemetry collector's log, you should see information similar to the following:
2024-02-18T17:14:03.825Z info ResourceSpans #0
Resource SchemaURL:
Resource attributes:
-> telemetry.sdk.language: Str(lua)
-> telemetry.sdk.name: Str(opentelemetry-lua)
-> telemetry.sdk.version: Str(0.1.1)
-> hostname: Str(e34673e24631)
-> service.name: Str(APISIX)
ScopeSpans #0
ScopeSpans SchemaURL:
InstrumentationScope opentelemetry-lua
Span #0
Trace ID : fbd0a38d4ea4a128ff1a688197bc58b0
Parent ID :
ID : af3dc7642104748a
Name : GET /anything
Kind : Server
Start time : 2024-02-18 17:14:03.763244032 +0000 UTC
End time : 2024-02-18 17:14:03.920229888 +0000 UTC
Status code : Unset
Status message :
Attributes:
-> net.host.name: Str(127.0.0.1)
-> http.method: Str(GET)
-> http.scheme: Str(http)
-> http.target: Str(/anything)
-> http.user_agent: Str(curl/7.64.1)
-> apisix.route_id: Str(otel-tracing-route)
-> apisix.route_name: Empty()
-> apisix.response_source: Str(upstream)
-> http.route: Str(/anything)
-> http.status_code: Int(200)
{"kind": "exporter", "data_type": "traces", "name": "debug"}
To visualize these traces, you can export your telemetry to backend services, such as Zipkin and Prometheus. See exporters for more details.
In API7 Enterprise (from version 3.9.10), each request span includes an apisix.response_source attribute that classifies the origin of the HTTP response:
- apisix: the response was generated by APISIX itself, such as a plugin rejection, authentication failure, or route-not-found error.
- nginx: the response was generated by the NGINX proxy layer, such as a connection refused or upstream timeout error.
- upstream: the response came from the actual upstream service.
This attribute enables more precise error attribution in trace analysis — for example, distinguishing gateway-side rejections from real upstream errors. This attribute is not available in APISIX yet.
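For example, a trace-analysis job could bucket error spans by this attribute to separate gateway-generated failures from genuine upstream failures. A hypothetical sketch, assuming spans are represented as dictionaries of attributes shaped like the collector log output above:

```python
from collections import Counter


def attribute_errors(spans: list[dict]) -> dict:
    """Count 5xx spans by apisix.response_source, separating
    gateway-side errors (apisix, nginx) from upstream errors."""
    buckets = Counter()
    for attrs in spans:
        if attrs.get("http.status_code", 0) >= 500:
            buckets[attrs.get("apisix.response_source", "unknown")] += 1
    return dict(buckets)
```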
Using Trace Variables in Logging
The following example demonstrates how to configure the opentelemetry plugin to set the following built-in variables, which can be used in logger plugins or access logs:
- opentelemetry_context_traceparent: trace parent ID
- opentelemetry_trace_id: trace ID of the current span
- opentelemetry_span_id: span ID of the current span
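The traceparent value follows the W3C Trace Context format, packing the version, trace ID, span ID, and sampling flags into one hyphen-separated string. An illustrative parser (not part of APISIX):

```python
def parse_traceparent(header: str) -> dict:
    """Split a W3C traceparent header: version-trace_id-span_id-flags."""
    version, trace_id, span_id, flags = header.split("-")
    if len(trace_id) != 32 or len(span_id) != 16:
        raise ValueError(f"malformed traceparent: {header}")
    return {
        "version": version,
        "trace_id": trace_id,
        "span_id": span_id,
        # Bit 0 of the flags byte is the "sampled" flag.
        "sampled": bool(int(flags, 16) & 1),
    }
```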
Configure the plugin metadata to set set_ngx_var to true:
- Admin API
- ADC
- Ingress Controller
curl "http://127.0.0.1:9180/apisix/admin/plugin_metadata/opentelemetry" -X PUT \
  -H "X-API-KEY: ${ADMIN_API_KEY}" \
  -d '{
    "set_ngx_var": true
  }'
plugin_metadata:
  - name: opentelemetry
    set_ngx_var: true
Synchronize the configuration to the gateway:
adc sync -f adc.yaml
Update the pluginMetadata field in your existing GatewayProxy resource and keep the collector configuration:
apiVersion: apisix.apache.org/v1alpha1
kind: GatewayProxy
metadata:
  namespace: aic
  name: apisix-config
spec:
  provider:
    type: ControlPlane
    controlPlane:
      # your control plane connection configuration
      # ....
  pluginMetadata:
    opentelemetry:
      collector:
        address: "otel-collector.aic.svc.cluster.local:4318"
      set_ngx_var: true
Apply the configuration to your cluster:
kubectl apply -f gateway-proxy.yaml
Update the access log format in the configuration file to use the opentelemetry plugin variables as follows:
nginx_config:
  http:
    enable_access_log: true
    access_log_format: '{"time": "$time_iso8601","opentelemetry_context_traceparent": "$opentelemetry_context_traceparent","opentelemetry_trace_id": "$opentelemetry_trace_id","opentelemetry_span_id": "$opentelemetry_span_id","remote_addr": "$remote_addr"}'
    access_log_format_escape: json
❶ access_log_format: customize the access log format to use the opentelemetry plugin variables.
Reload APISIX for configuration changes to take effect.
If you are using the Ingress Controller with Helm, update the access log format in the Helm values file and run helm upgrade. For more information, see the Helm chart values reference.
You should see access log entries similar to the following when you generate requests:
{"time": "18/Feb/2024:15:09:00 +0000","opentelemetry_context_traceparent": "00-fbd0a38d4ea4a128ff1a688197bc58b0-8f4b9d9970a02629-01","opentelemetry_trace_id": "fbd0a38d4ea4a128ff1a688197bc58b0","opentelemetry_span_id": "af3dc7642104748a","remote_addr": "172.10.0.1"}
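Because access_log_format_escape: json makes each access log entry valid JSON, the trace variables can be extracted directly, for example to look up a slow request's trace in your tracing backend. A hypothetical sketch:

```python
import json


def trace_ids_from_log(lines: list[str]) -> list[str]:
    """Extract opentelemetry_trace_id from JSON-formatted access log lines."""
    return [json.loads(line)["opentelemetry_trace_id"] for line in lines]
```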