Log with Elasticsearch
Elasticsearch is a popular JSON-based datastore for storing, searching, and indexing large volumes of data. It is often used to store logs from various sources and, together with tools like Logstash and Kibana, forms an observability stack known as the Elastic (ELK) Stack.
APISIX supports forwarding its logs directly to Elasticsearch through the elasticsearch-logger plugin. These logs can then be searched, filtered, and visualized in Kibana to gather insights for managing applications.
This guide will show you how to enable the elasticsearch-logger plugin to integrate APISIX with the ELK stack for observability.
Prerequisite(s)
- Install Docker.
- Install cURL to send requests to the services for validation.
- Follow the Getting Started tutorial to start a new APISIX instance in Docker.
Start Elasticsearch and Kibana
APISIX currently supports Elasticsearch versions up to and including 7.x. This guide uses version 7.17.1 for both Elasticsearch and Kibana.
- Docker
- Kubernetes
Start an Elasticsearch instance in Docker:
docker run -d \
  --name elasticsearch \
  --network apisix-quickstart-net \
  -v elasticsearch_vol:/usr/share/elasticsearch/data/ \
  -p 9200:9200 \
  -p 9300:9300 \
  -e ES_JAVA_OPTS="-Xms512m -Xmx512m" \
  -e discovery.type=single-node \
  -e xpack.security.enabled=false \
  docker.elastic.co/elasticsearch/elasticsearch:7.17.1
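Optionally, verify that Elasticsearch is up before continuing. This check assumes port 9200 is published to your local machine, as in the command above:
curl "http://127.0.0.1:9200"
You should see a JSON response that includes the cluster name and the version number 7.17.1.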
Start a Kibana instance in Docker to visualize the indexed data in Elasticsearch:
docker run -d \
  --name kibana \
  --network apisix-quickstart-net \
  -p 5601:5601 \
  -e ELASTICSEARCH_HOSTS="http://elasticsearch:9200" \
  docker.elastic.co/kibana/kibana:7.17.1
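Similarly, once the container finishes starting (this can take a minute), you can check Kibana's status endpoint:
curl -i "http://127.0.0.1:5601/api/status"
A 200 OK response indicates that Kibana is ready.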
Create a Kubernetes manifest file for Elasticsearch:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: elasticsearch-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearch
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch:7.17.1
          ports:
            - containerPort: 9200
            - containerPort: 9300
          env:
            - name: ES_JAVA_OPTS
              value: "-Xms512m -Xmx512m"
            - name: discovery.type
              value: "single-node"
            - name: xpack.security.enabled
              value: "false"
          volumeMounts:
            - name: elasticsearch-storage
              mountPath: /usr/share/elasticsearch/data
      volumes:
        - name: elasticsearch-storage
          persistentVolumeClaim:
            claimName: elasticsearch-data
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
spec:
  selector:
    app: elasticsearch
  ports:
    - name: http
      port: 9200
      targetPort: 9200
    - name: transport
      port: 9300
      targetPort: 9300
  type: ClusterIP
Create another Kubernetes manifest file for Kibana:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
        - name: kibana
          image: docker.elastic.co/kibana/kibana:7.17.1
          ports:
            - containerPort: 5601
          env:
            - name: ELASTICSEARCH_HOSTS
              value: "http://elasticsearch:9200"
---
apiVersion: v1
kind: Service
metadata:
  name: kibana
spec:
  selector:
    app: kibana
  ports:
    - name: http
      port: 5601
      targetPort: 5601
  type: ClusterIP
Apply the configuration to your cluster:
kubectl apply -f elasticsearch.yaml -f kibana.yaml
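Wait for both deployments to become ready before continuing:
kubectl rollout status deployment/elasticsearch
kubectl rollout status deployment/kibana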
Forward the Kibana service port to your local machine:
kubectl port-forward svc/kibana 5601:5601 &
If successful, you should see the Kibana web dashboard on localhost:5601.
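If you would like to query Elasticsearch directly from your machine to verify logs later in this guide, forward its service port as well:
kubectl port-forward svc/elasticsearch 9200:9200 &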
Enable elasticsearch-logger Plugin
Enable elasticsearch-logger globally and create a sample route to generate logs. Alternatively, you can enable the plugin on a specific route.
- Admin API
- ADC
- Ingress Controller
Enable the elasticsearch-logger plugin on all routes:
curl "http://127.0.0.1:9180/apisix/admin/global_rules/" -X PUT -d '
{
"id": "elasticsearch",
"plugins": {
"elasticsearch-logger": {
"endpoint_addrs": ["http://elasticsearch:9200"],
"field": {
"index": "gateway",
"type": "logs"
},
"ssl_verify": false,
"timeout": 60,
"retry_delay": 1,
"buffer_duration": 60,
"max_retry_count": 0,
"batch_max_size": 5,
"inactive_timeout": 5
}
}
}'
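You can verify that the global rule was created by fetching it back from the Admin API:
curl "http://127.0.0.1:9180/apisix/admin/global_rules/elasticsearch"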
Create a sample route on which you will collect logs:
curl -i "http://127.0.0.1:9180/apisix/admin/routes" -X PUT -d '
{
"id": "getting-started-ip",
"uri": "/ip",
"upstream": {
"nodes": {
"httpbin.org:80":1
},
"type": "roundrobin"
}
}'
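Send a request to verify that the route is reachable:
curl -i "http://127.0.0.1:9080/ip"
You should receive an HTTP/1.1 200 OK response.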
Enable the elasticsearch-logger plugin globally and create a sample route on which you will collect logs:
global_rules:
  elasticsearch-logger:
    endpoint_addr: "http://elasticsearch:9200"
    field:
      index: "gateway"
      type: "logs"
    ssl_verify: false
    timeout: 60
    retry_delay: 1
    buffer_duration: 60
    max_retry_count: 0
    batch_max_size: 5
    inactive_timeout: 5
services:
  - name: httpbin Service
    routes:
      - uris:
          - /ip
        name: getting-started-ip
    upstream:
      type: roundrobin
      nodes:
        - host: httpbin.org
          port: 80
          weight: 1
Synchronize the configurations to APISIX:
adc sync -f adc-elasticsearch.yaml
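To confirm that the gateway now matches your local file, you can run a diff; this is a sketch that assumes your ADC version ships the diff subcommand:
adc diff -f adc-elasticsearch.yaml
An empty diff means the configuration was synchronized successfully.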
- Gateway API
- APISIX CRD
Create a Kubernetes manifest file to enable elasticsearch-logger globally:
apiVersion: apisix.apache.org/v1alpha1
kind: GatewayProxy
metadata:
  name: apisix-config
spec:
  plugins:
    - name: elasticsearch-logger
      enabled: true
      config:
        endpoint_addr: "http://elasticsearch:9200"
        field:
          index: "gateway"
          type: "logs"
        ssl_verify: false
        timeout: 60
        retry_delay: 1
        buffer_duration: 60
        max_retry_count: 0
        batch_max_size: 5
        inactive_timeout: 5
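Note that a GatewayProxy only takes effect once it is referenced by a Gateway. A minimal sketch, assuming a Gateway named apisix with an HTTP listener on port 80 as in the quickstart:
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: apisix
spec:
  gatewayClassName: apisix
  listeners:
    - name: http
      protocol: HTTP
      port: 80
  infrastructure:
    parametersRef:
      group: apisix.apache.org
      kind: GatewayProxy
      name: apisix-config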
Create another Kubernetes manifest file for a sample route on which you will collect logs:
apiVersion: v1
kind: Service
metadata:
  name: httpbin-external-domain
spec:
  type: ExternalName
  externalName: httpbin.org
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: getting-started-ip
spec:
  parentRefs:
    - name: apisix
  rules:
    - matches:
        - path:
            type: Exact
            value: /ip
      backendRefs:
        - name: httpbin-external-domain
          port: 80
Create a Kubernetes manifest file to enable elasticsearch-logger globally:
apiVersion: apisix.apache.org/v2
kind: ApisixGlobalRule
metadata:
  name: global-elasticsearch
spec:
  ingressClassName: apisix
  plugins:
    - name: elasticsearch-logger
      enable: true
      config:
        endpoint_addr: "http://elasticsearch:9200"
        field:
          index: "gateway"
          type: "logs"
        ssl_verify: false
        timeout: 60
        retry_delay: 1
        buffer_duration: 60
        max_retry_count: 0
        batch_max_size: 5
        inactive_timeout: 5
Create another Kubernetes manifest file for a sample route on which you will collect logs:
apiVersion: apisix.apache.org/v2
kind: ApisixUpstream
metadata:
  name: httpbin-external-domain
spec:
  externalNodes:
    - type: Domain
      name: httpbin.org
---
apiVersion: apisix.apache.org/v2
kind: ApisixRoute
metadata:
  name: getting-started-ip
spec:
  ingressClassName: apisix
  http:
    - name: getting-started-ip
      match:
        paths:
          - /ip
      upstreams:
        - name: httpbin-external-domain
Apply the configuration to your cluster:
kubectl apply -f global-elasticsearch.yaml -f httpbin-route.yaml
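You can confirm that the route resource was created; which kind exists depends on the tab you followed above:
kubectl get httproute getting-started-ip    # Gateway API
kubectl get apisixroute getting-started-ip  # APISIX CRD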
Customize Log Format
As an optional step, you can customize the log format for elasticsearch-logger. The log format of most APISIX logging plugins can be customized locally on the plugin (e.g., bound to a route) and/or globally with plugin metadata.
Add host address, timestamp, and client IP address to the logs with built-in variables:
- Admin API
- ADC
- Ingress Controller
curl "http://127.0.0.1:9180/apisix/admin/plugin_metadata/elasticsearch-logger" -X PUT -d '
{
"log_format":{
"host":"$host",
"timestamp":"$time_iso8601",
"client_ip":"$remote_addr"
}
}'
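You can confirm that the metadata was saved by fetching it back:
curl "http://127.0.0.1:9180/apisix/admin/plugin_metadata/elasticsearch-logger"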
plugin_metadata:
  elasticsearch-logger:
    log_format:
      host: $host
      client_ip: $remote_addr
      timestamp: $time_iso8601
Synchronize the configurations to APISIX:
adc sync -f adc-plugin-metadata.yaml -f adc-elasticsearch.yaml
- Gateway API
- APISIX CRD
Create a Kubernetes manifest file to configure the plugin metadata for the elasticsearch-logger plugin:
apiVersion: apisix.apache.org/v1alpha1
kind: GatewayProxy
metadata:
  name: apisix-config
spec:
  pluginMetadata:
    elasticsearch-logger: {
      "log_format": {
        "host": "$host",
        "timestamp": "$time_iso8601",
        "client_ip": "$remote_addr"
      }
    }
Apply the configuration to your cluster:
kubectl apply -f elasticsearch-plugin-metadata.yaml
APISIX CRDs do not currently support plugin metadata.
Configure Kibana
Send some requests to the route to generate access log entries:
for i in {1..10}; do
  curl -i "http://127.0.0.1:9080/ip"
done
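Before moving to Kibana, you can optionally confirm that log entries have reached Elasticsearch by querying the gateway index directly. This assumes port 9200 is reachable from your machine; since the plugin batches log entries, allow a few seconds for the buffer to flush:
curl "http://127.0.0.1:9200/gateway/_search?pretty&size=1"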
Open the Kibana dashboard at localhost:5601 and select the Discover tab from the menu. Create a new index pattern to fetch the data from Elasticsearch:

Create a pattern gateway to match the indexed data in Elasticsearch:

If your configuration is correct, you can go back to the Discover tab and view the logs from APISIX:

Next Steps
See the elasticsearch-logger plugin reference to learn more about the plugin configuration options.