elasticsearch-logger
The elasticsearch-logger plugin pushes request and response logs in batches to Elasticsearch and supports customizing the log format. When enabled, the plugin serializes the request context information into the Elasticsearch Bulk format and adds it to a queue, before pushing the batch to Elasticsearch.
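As a rough sketch of what the plugin assembles, each log entry becomes a two-line NDJSON record in the Bulk format: an action line naming the target index, followed by the serialized log document. The field names below are illustrative, not the plugin's exact output:

```shell
# Build a minimal Bulk-format payload: one action line plus one log document.
# Field names here are illustrative, not the plugin's exact output.
bulk_payload=$(printf '%s\n' \
  '{"index": {"_index": "gateway", "_type": "logs"}}' \
  '{"route_id": "example-route", "status": 200, "client_ip": "127.0.0.1"}')
echo "$bulk_payload"
```

A batch of entries is concatenated the same way and sent to Elasticsearch in a single request.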
Examples
The examples below demonstrate how you can configure the elasticsearch-logger plugin for different scenarios.
To follow along with the examples, start Elasticsearch and Kibana:
- Docker
- Kubernetes
Start an Elasticsearch instance:
docker run -d \
--name elasticsearch \
--network apisix-quickstart-net \
-v elasticsearch_vol:/usr/share/elasticsearch/data/ \
-p 9200:9200 \
-p 9300:9300 \
-e ES_JAVA_OPTS="-Xms512m -Xmx512m" \
-e discovery.type=single-node \
-e xpack.security.enabled=false \
docker.elastic.co/elasticsearch/elasticsearch:7.17.1
Start a Kibana instance to visualize the indexed data in Elasticsearch:
docker run -d \
--name kibana \
--network apisix-quickstart-net \
-p 5601:5601 \
-e ELASTICSEARCH_HOSTS="http://elasticsearch:9200" \
docker.elastic.co/kibana/kibana:7.17.1
Create a Kubernetes manifest file for the Elasticsearch deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: aic
name: elasticsearch
spec:
replicas: 1
selector:
matchLabels:
app: elasticsearch
template:
metadata:
labels:
app: elasticsearch
spec:
containers:
- name: elasticsearch
image: docker.elastic.co/elasticsearch/elasticsearch:7.17.1
env:
- name: ES_JAVA_OPTS
value: "-Xms512m -Xmx512m"
- name: discovery.type
value: single-node
- name: xpack.security.enabled
value: "false"
ports:
- containerPort: 9200
- containerPort: 9300
Create a Kubernetes manifest file for the Elasticsearch service:
apiVersion: v1
kind: Service
metadata:
namespace: aic
name: elasticsearch
spec:
selector:
app: elasticsearch
ports:
- name: http
port: 9200
targetPort: 9200
- name: transport
port: 9300
targetPort: 9300
type: ClusterIP
Create a Kubernetes manifest file for the Kibana deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: aic
name: kibana
spec:
replicas: 1
selector:
matchLabels:
app: kibana
template:
metadata:
labels:
app: kibana
spec:
containers:
- name: kibana
image: docker.elastic.co/kibana/kibana:7.17.1
env:
- name: ELASTICSEARCH_HOSTS
value: "http://elasticsearch.aic.svc:9200"
ports:
- containerPort: 5601
Create a Kubernetes manifest file for the Kibana service:
apiVersion: v1
kind: Service
metadata:
namespace: aic
name: kibana
spec:
selector:
app: kibana
ports:
- name: http
port: 5601
targetPort: 5601
type: ClusterIP
Apply the manifests:
kubectl apply -f elasticsearch-deployment.yaml -f elasticsearch-service.yaml -f kibana-deployment.yaml -f kibana-service.yaml
To access Kibana, forward the service port:
kubectl port-forward -n aic svc/kibana 5601:5601
If successful, you should see the Kibana dashboard on localhost:5601.
Log in the Default Log Format
The following example demonstrates how you can enable the elasticsearch-logger plugin on a route, which logs client requests and responses and pushes the logs to Elasticsearch.
Create a route with elasticsearch-logger as follows:
- Admin API
- ADC
- Ingress Controller
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${ADMIN_API_KEY}" \
-d '{
"id": "elasticsearch-logger-route",
"uri": "/anything",
"plugins": {
"elasticsearch-logger": {
"endpoint_addrs": ["http://elasticsearch:9200"],
"field": {
"index": "gateway",
"type": "logs"
}
}
},
"upstream": {
"nodes": {
"httpbin.org:80": 1
},
"type": "roundrobin"
}
}'
services:
- name: httpbin
routes:
- uris:
- /anything
name: elasticsearch-logger-route
plugins:
elasticsearch-logger:
endpoint_addrs:
- "http://elasticsearch:9200"
field:
index: gateway
type: logs
upstream:
type: roundrobin
nodes:
- host: httpbin.org
port: 80
weight: 1
Synchronize the configuration to the gateway:
adc sync -f adc.yaml
- Gateway API
- APISIX CRD
apiVersion: v1
kind: Service
metadata:
namespace: aic
name: httpbin-external-domain
spec:
type: ExternalName
externalName: httpbin.org
---
apiVersion: apisix.apache.org/v1alpha1
kind: PluginConfig
metadata:
namespace: aic
name: elasticsearch-logger-plugin-config
spec:
plugins:
- name: elasticsearch-logger
config:
endpoint_addrs:
- "http://elasticsearch.aic.svc:9200"
field:
index: gateway
type: logs
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
namespace: aic
name: elasticsearch-logger-route
spec:
parentRefs:
- name: apisix
rules:
- matches:
- path:
type: Exact
value: /anything
filters:
- type: ExtensionRef
extensionRef:
group: apisix.apache.org
kind: PluginConfig
name: elasticsearch-logger-plugin-config
backendRefs:
- name: httpbin-external-domain
port: 80
apiVersion: apisix.apache.org/v2
kind: ApisixUpstream
metadata:
namespace: aic
name: httpbin-external-domain
spec:
ingressClassName: apisix
externalNodes:
- type: Domain
name: httpbin.org
---
apiVersion: apisix.apache.org/v2
kind: ApisixRoute
metadata:
namespace: aic
name: elasticsearch-logger-route
spec:
ingressClassName: apisix
http:
- name: elasticsearch-logger-route
match:
paths:
- /anything
methods:
- GET
upstreams:
- name: httpbin-external-domain
plugins:
- name: elasticsearch-logger
config:
endpoint_addrs:
- "http://elasticsearch.aic.svc:9200"
field:
index: gateway
type: logs
Apply the configuration:
kubectl apply -f elasticsearch-logger-ic.yaml
❶ Configure the Elasticsearch endpoint address.
❷ Configure the index field as gateway.
❸ Configure the type field as logs.
Send a request to the route to generate a log entry:
curl -i "http://127.0.0.1:9080/anything"
You should receive an HTTP/1.1 200 OK response.
Navigate to the Kibana dashboard on localhost:5601 and, under the Discover tab, create a new index pattern gateway to fetch the data from Elasticsearch. Once configured, navigate back to the Discover tab and you should see a log entry similar to the following:
{
"_index": "gateway",
"_type": "logs",
"_id": "CE-JL5QBOkdYRG7kEjTJ",
"_version": 1,
"_score": 1,
"_source": {
"request": {
"headers": {
"host": "127.0.0.1:9080",
"accept": "*/*",
"user-agent": "curl/8.6.0"
},
"size": 85,
"querystring": {},
"method": "GET",
"url": "http://127.0.0.1:9080/anything",
"uri": "/anything"
},
"response": {
"headers": {
"content-type": "application/json",
"access-control-allow-credentials": "true",
"server": "APISIX/3.13.0",
"content-length": "390",
"access-control-allow-origin": "*",
"connection": "close",
"date": "Mon, 13 Jan 2025 10:18:14 GMT"
},
"status": 200,
"size": 618
},
"route_id": "elasticsearch-logger-route",
"latency": 585.00003814697,
"apisix_latency": 18.000038146973,
"upstream_latency": 567,
"upstream": "50.19.58.113:80",
"server": {
"hostname": "0b9a772e68f8",
"version": "3.13.0"
},
"service_id": "",
"client_ip": "192.168.65.1"
},
"fields": {
...
}
}
Customize Log Format With Plugin Metadata
The following example demonstrates how you can customize log format using plugin metadata and built-in variables to log specific headers from request and response.
In APISIX, plugin metadata is used to configure the common metadata fields of all instances of the same plugin. It is useful when a plugin is enabled across multiple resources and requires a universal update to its metadata fields.
First, create a route with elasticsearch-logger as follows:
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${ADMIN_API_KEY}" \
-d '{
"id": "elasticsearch-logger-route",
"uri": "/anything",
"plugins": {
"elasticsearch-logger": {
"endpoint_addrs": ["http://elasticsearch:9200"],
"field": {
"index": "gateway",
"type": "logs"
}
}
},
"upstream": {
"nodes": {
"httpbin.org:80": 1
},
"type": "roundrobin"
}
}'
Next, configure the plugin metadata for elasticsearch-logger:
- Admin API
- ADC
- Ingress Controller
curl "http://127.0.0.1:9180/apisix/admin/plugin_metadata/elasticsearch-logger" -X PUT \
-H "X-API-KEY: ${ADMIN_API_KEY}" \
-d '{
"log_format": {
"host": "$host",
"@timestamp": "$time_iso8601",
"client_ip": "$remote_addr",
"env": "$http_env",
"resp_content_type": "$sent_http_Content_Type"
}
}'
plugin_metadata:
- name: elasticsearch-logger
log_format:
host: "$host"
"@timestamp": "$time_iso8601"
client_ip: "$remote_addr"
env: "$http_env"
resp_content_type: "$sent_http_Content_Type"
Synchronize the configuration to the gateway:
adc sync -f adc.yaml
apiVersion: apisix.apache.org/v1alpha1
kind: GatewayProxy
metadata:
namespace: aic
name: apisix-config
spec:
provider:
type: ControlPlane
controlPlane:
service:
name: apisix-admin
port: 9180
auth:
type: AdminKey
adminKey:
value: edd1c9f034335f136f87ad84b625c8f1
pluginMetadata:
elasticsearch-logger:
log_format:
host: "$host"
"@timestamp": "$time_iso8601"
client_ip: "$remote_addr"
env: "$http_env"
resp_content_type: "$sent_http_Content_Type"
Apply the configuration:
kubectl apply -f elasticsearch-logger-metadata.yaml
❶ Log the custom request header env.
❷ Log the response header Content-Type.
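The $http_env variable follows the standard NGINX convention for request header variables: lowercase the header name, replace hyphens with underscores, and prefix the result with http_. The helper below is a hypothetical sketch of that mapping, not plugin code:

```shell
# Hypothetical helper showing how a request header name maps to its
# NGINX-style variable: lowercase, hyphens to underscores, http_ prefix.
header_to_var() {
  echo "http_$(echo "$1" | tr 'A-Z-' 'a-z_')"
}

header_to_var "env"            # http_env
header_to_var "Content-Type"   # http_content_type
```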
Send a request to the route with the env header:
curl -i "http://127.0.0.1:9080/anything" -H "env: dev"
You should receive an HTTP/1.1 200 OK response.
Navigate to the Kibana dashboard on localhost:5601 and, under the Discover tab, create a new index pattern gateway to fetch the data from Elasticsearch, if you have not done so already. Once configured, navigate back to the Discover tab and you should see a log entry similar to the following:
{
"_index": "gateway",
"_type": "logs",
"_id": "Ck-WL5QBOkdYRG7kODS0",
"_version": 1,
"_score": 1,
"_source": {
"client_ip": "192.168.65.1",
"route_id": "elasticsearch-logger-route",
"@timestamp": "2025-01-06T10:32:36+00:00",
"host": "127.0.0.1",
"resp_content_type": "application/json"
},
"fields": {
...
}
}
Log Request Bodies Conditionally
The following example demonstrates how you can conditionally log request bodies.
Create a route with elasticsearch-logger as follows:
- Admin API
- ADC
- Ingress Controller
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${ADMIN_API_KEY}" \
-d '{
"plugins": {
"elasticsearch-logger": {
"endpoint_addrs": ["http://elasticsearch:9200"],
"field": {
"index": "gateway",
"type": "logs"
},
"include_req_body": true,
"include_req_body_expr": [["arg_log_body", "==", "yes"]]
}
},
"upstream": {
"nodes": {
"httpbin.org:80": 1
},
"type": "roundrobin"
},
"uri": "/anything",
"id": "elasticsearch-logger-route"
}'
services:
- name: httpbin
routes:
- uris:
- /anything
name: elasticsearch-logger-route
plugins:
elasticsearch-logger:
endpoint_addrs:
- "http://elasticsearch:9200"
field:
index: gateway
type: logs
include_req_body: true
include_req_body_expr:
- - arg_log_body
- "=="
- "yes"
upstream:
type: roundrobin
nodes:
- host: httpbin.org
port: 80
weight: 1
Synchronize the configuration to the gateway:
adc sync -f adc.yaml
- Gateway API
- APISIX CRD
apiVersion: v1
kind: Service
metadata:
namespace: aic
name: httpbin-external-domain
spec:
type: ExternalName
externalName: httpbin.org
---
apiVersion: apisix.apache.org/v1alpha1
kind: PluginConfig
metadata:
namespace: aic
name: elasticsearch-logger-plugin-config
spec:
plugins:
- name: elasticsearch-logger
config:
endpoint_addrs:
- "http://elasticsearch.aic.svc:9200"
field:
index: gateway
type: logs
include_req_body: true
include_req_body_expr:
- - arg_log_body
- "=="
- "yes"
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
namespace: aic
name: elasticsearch-logger-route
spec:
parentRefs:
- name: apisix
rules:
- matches:
- path:
type: Exact
value: /anything
filters:
- type: ExtensionRef
extensionRef:
group: apisix.apache.org
kind: PluginConfig
name: elasticsearch-logger-plugin-config
backendRefs:
- name: httpbin-external-domain
port: 80
apiVersion: apisix.apache.org/v2
kind: ApisixUpstream
metadata:
namespace: aic
name: httpbin-external-domain
spec:
ingressClassName: apisix
externalNodes:
- type: Domain
name: httpbin.org
---
apiVersion: apisix.apache.org/v2
kind: ApisixRoute
metadata:
namespace: aic
name: elasticsearch-logger-route
spec:
ingressClassName: apisix
http:
- name: elasticsearch-logger-route
match:
paths:
- /anything
methods:
- GET
upstreams:
- name: httpbin-external-domain
plugins:
- name: elasticsearch-logger
config:
endpoint_addrs:
- "http://elasticsearch.aic.svc:9200"
field:
index: gateway
type: logs
include_req_body: true
include_req_body_expr:
- - arg_log_body
- "=="
- "yes"
Apply the configuration:
kubectl apply -f elasticsearch-logger-ic.yaml
❶ include_req_body: set to true to include the request body.
❷ include_req_body_expr: include the request body only if the URL query string log_body equals yes.
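As an illustration of how the expression gates logging, the check below mirrors the [["arg_log_body", "==", "yes"]] rule: arg_-prefixed variables hold query parameter values, and the body is captured only when the comparison holds. The function is a hypothetical sketch, not plugin code:

```shell
# Hypothetical sketch of the include_req_body_expr check: log the body
# only when the log_body query parameter equals "yes".
should_log_body() {
  [ "$1" = "yes" ]    # $1 stands in for the arg_log_body variable
}

should_log_body "yes" && echo "body logged"
should_log_body "no"  || echo "body skipped"
```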
Send a request to the route with a URL query string satisfying the condition:
curl -i "http://127.0.0.1:9080/anything?log_body=yes" -X POST -d '{"env": "dev"}'
You should receive an HTTP/1.1 200 OK response.
Navigate to the Kibana dashboard on localhost:5601 and, under the Discover tab, create a new index pattern gateway to fetch the data from Elasticsearch, if you have not done so already. Once configured, navigate back to the Discover tab and you should see a log entry similar to the following:
{
"_index": "gateway",
"_type": "logs",
"_id": "Dk-cL5QBOkdYRG7k7DSW",
"_version": 1,
"_score": 1,
"_source": {
"request": {
"headers": {
"user-agent": "curl/8.6.0",
"accept": "*/*",
"content-length": "14",
"host": "127.0.0.1:9080",
"content-type": "application/x-www-form-urlencoded"
},
"size": 182,
"querystring": {
"log_body": "yes"
},
"body": "{\"env\": \"dev\"}",
"method": "POST",
"url": "http://127.0.0.1:9080/anything?log_body=yes",
"uri": "/anything?log_body=yes"
},
"start_time": 1735965595203,
"response": {
"headers": {
"content-type": "application/json",
"server": "APISIX/3.13.0",
"access-control-allow-credentials": "true",
"content-length": "548",
"access-control-allow-origin": "*",
"connection": "close",
"date": "Mon, 13 Jan 2025 11:02:32 GMT"
},
"status": 200,
"size": 776
},
"route_id": "elasticsearch-logger-route",
"latency": 703.9999961853,
"apisix_latency": 34.999996185303,
"upstream_latency": 669,
"upstream": "34.197.122.172:80",
"server": {
"hostname": "0b9a772e68f8",
"version": "3.13.0"
},
"service_id": "",
"client_ip": "192.168.65.1"
},
"fields": {
...
}
}
Send a request to the route without any URL query string:
curl -i "http://127.0.0.1:9080/anything" -X POST -d '{"env": "dev"}'
Navigate to the Kibana dashboard Discover tab and you should see a log entry generated, but without the request body:
{
"_index": "gateway",
"_type": "logs",
"_id": "EU-eL5QBOkdYRG7kUDST",
"_version": 1,
"_score": 1,
"_source": {
"request": {
"headers": {
"content-type": "application/x-www-form-urlencoded",
"accept": "*/*",
"content-length": "14",
"host": "127.0.0.1:9080",
"user-agent": "curl/8.6.0"
},
"size": 169,
"querystring": {},
"method": "POST",
"url": "http://127.0.0.1:9080/anything",
"uri": "/anything"
},
"start_time": 1735965686363,
"response": {
"headers": {
"content-type": "application/json",
"access-control-allow-credentials": "true",
"server": "APISIX/3.13.0",
"content-length": "510",
"access-control-allow-origin": "*",
"connection": "close",
"date": "Mon, 13 Jan 2025 11:15:54 GMT"
},
"status": 200,
"size": 738
},
"route_id": "elasticsearch-logger-route",
"latency": 680.99999427795,
"apisix_latency": 4.9999942779541,
"upstream_latency": 676,
"upstream": "34.197.122.172:80",
"server": {
"hostname": "0b9a772e68f8",
"version": "3.13.0"
},
"service_id": "",
"client_ip": "192.168.65.1"
},
"fields": {
...
}
}
If you have customized the log_format in addition to setting include_req_body or include_resp_body to true, the plugin will not include the bodies in the logs.
As a workaround, you can use the NGINX variable $request_body in the log format, for example:
{
"elasticsearch-logger": {
...,
"log_format": {"body": "$request_body"}
}
}
Include Request Date in Elasticsearch Index
The following example demonstrates how you can configure the elasticsearch-logger plugin to include the request date in the Elasticsearch index name.
Create a route with elasticsearch-logger as follows:
- Admin API
- ADC
- Ingress Controller
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${ADMIN_API_KEY}" \
-d '{
"id": "elasticsearch-logger-route",
"uri": "/anything",
"plugins": {
"elasticsearch-logger": {
"endpoint_addrs": ["http://elasticsearch:9200"],
"field": {
"index": "api7-{%Y.%m.%d}",
"type": "logs"
}
}
},
"upstream": {
"nodes": {
"httpbin.org:80": 1
},
"type": "roundrobin"
}
}'
services:
- name: httpbin
routes:
- uris:
- /anything
name: elasticsearch-logger-route
plugins:
elasticsearch-logger:
endpoint_addrs:
- "http://elasticsearch:9200"
field:
index: "api7-{%Y.%m.%d}"
type: logs
upstream:
type: roundrobin
nodes:
- host: httpbin.org
port: 80
weight: 1
Synchronize the configuration to the gateway:
adc sync -f adc.yaml
- Gateway API
- APISIX CRD
apiVersion: v1
kind: Service
metadata:
namespace: aic
name: httpbin-external-domain
spec:
type: ExternalName
externalName: httpbin.org
---
apiVersion: apisix.apache.org/v1alpha1
kind: PluginConfig
metadata:
namespace: aic
name: elasticsearch-logger-plugin-config
spec:
plugins:
- name: elasticsearch-logger
config:
endpoint_addrs:
- "http://elasticsearch.aic.svc:9200"
field:
index: "api7-{%Y.%m.%d}"
type: logs
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
namespace: aic
name: elasticsearch-logger-route
spec:
parentRefs:
- name: apisix
rules:
- matches:
- path:
type: Exact
value: /anything
filters:
- type: ExtensionRef
extensionRef:
group: apisix.apache.org
kind: PluginConfig
name: elasticsearch-logger-plugin-config
backendRefs:
- name: httpbin-external-domain
port: 80
apiVersion: apisix.apache.org/v2
kind: ApisixUpstream
metadata:
namespace: aic
name: httpbin-external-domain
spec:
ingressClassName: apisix
externalNodes:
- type: Domain
name: httpbin.org
---
apiVersion: apisix.apache.org/v2
kind: ApisixRoute
metadata:
namespace: aic
name: elasticsearch-logger-route
spec:
ingressClassName: apisix
http:
- name: elasticsearch-logger-route
match:
paths:
- /anything
methods:
- GET
upstreams:
- name: httpbin-external-domain
plugins:
- name: elasticsearch-logger
config:
endpoint_addrs:
- "http://elasticsearch.aic.svc:9200"
field:
index: "api7-{%Y.%m.%d}"
type: logs
Apply the configuration:
kubectl apply -f elasticsearch-logger-ic.yaml
❶ Configure the Elasticsearch endpoint address.
❷ Configure the index field to include the current year, month, and day.
❸ Configure the type field as logs.
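The {%Y.%m.%d} placeholder is expanded with strftime-style specifiers, so the resolved index name rotates daily. You can preview today's index name with date:

```shell
# Preview the index name the {%Y.%m.%d} placeholder resolves to today.
echo "api7-$(date +%Y.%m.%d)"
```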
Send a request to the route to generate a log entry:
curl -i "http://127.0.0.1:9080/anything"
You should receive an HTTP/1.1 200 OK response.
Navigate to the Kibana dashboard on localhost:5601 and, under the Discover tab, create a new index pattern api7* to fetch the data from Elasticsearch. Once configured, navigate back to the Discover tab and you should see a log entry similar to the following:
{
"_index": "api7-2025.03.10",
"_type": "logs",
"_id": "CE-KL5QB0kdYRG7dEiTJ",
"_version": 1,
"_score": 1,
"_source": {
"request": {
...
},
"response": {
...
},
"status": 200,
"size": 618
},
...
}