error-log-logger
The error-log-logger plugin pushes APISIX's error logs (error.log) in batches to TCP, Apache SkyWalking, Apache Kafka, or ClickHouse servers. You can specify the minimum severity level of the logs that the plugin should send.
The plugin is disabled by default. Once enabled, it automatically starts pushing error logs to the remote server. Remote server details should be configured in the plugin metadata only, not on other resources such as routes.
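Because logs are pushed in batches, the plugin metadata also accepts batch processor fields to tune batching behavior, alongside the destination-specific fields shown in the examples below. A minimal sketch with default-like values; field availability may vary across APISIX versions, so verify against your version's plugin schema:

```json
{
  "tcp": {
    "host": "192.168.2.103",
    "port": 19000
  },
  "level": "WARN",
  "batch_max_size": 1000,
  "inactive_timeout": 3,
  "buffer_duration": 60
}
```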
Examples
The examples below demonstrate how you can configure error-log-logger plugin for different scenarios.
If you are using API7 Enterprise, the plugin is enabled by default. If you are using APISIX, the error-log-logger plugin is disabled by default. To enable the plugin, add it to your configuration file as follows:
plugins:
- ...
- error-log-logger
Reload APISIX for changes to take effect.
Send Logs to TCP Server
The following example demonstrates how you can configure the error-log-logger plugin to send error logs to a TCP server.
Start a TCP server listening on port 19000:
- Docker
- Kubernetes
nc -l 19000
Create a Kubernetes manifest for a TCP server deployment using socat:
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: aic
name: tcp-server
spec:
replicas: 1
selector:
matchLabels:
app: tcp-server
template:
metadata:
labels:
app: tcp-server
spec:
containers:
- name: tcp-server
image: alpine/socat
args: ["TCP-LISTEN:19000,fork,reuseaddr", "STDOUT"]
ports:
- containerPort: 19000
---
apiVersion: v1
kind: Service
metadata:
namespace: aic
name: tcp-server
spec:
selector:
app: tcp-server
ports:
- name: tcp
port: 19000
targetPort: 19000
type: ClusterIP
Apply the manifests:
kubectl apply -f tcp-server.yaml
Configure the plugin metadata for error-log-logger:
- Admin API
- ADC
- Ingress Controller
curl "http://127.0.0.1:9180/apisix/admin/plugin_metadata/error-log-logger" -X PUT \
-H "X-API-KEY: ${ADMIN_API_KEY}" \
-d '{
"tcp": {
"host": "192.168.2.103",
"port": 19000
},
"level": "INFO"
}'
plugin_metadata:
- name: error-log-logger
tcp:
host: "192.168.2.103"
port: 19000
level: INFO
Synchronize the configuration to the gateway:
adc sync -f adc.yaml
apiVersion: apisix.apache.org/v1alpha1
kind: GatewayProxy
metadata:
namespace: aic
name: apisix-config
spec:
provider:
type: ControlPlane
controlPlane:
service:
name: apisix-admin
port: 9180
auth:
type: AdminKey
adminKey:
value: edd1c9f034335f136f87ad84b625c8f1
pluginMetadata:
error-log-logger:
tcp:
host: "tcp-server.aic.svc"
port: 19000
level: INFO
Apply the configuration:
kubectl apply -f error-log-logger-metadata.yaml
❶ Configure the host to the TCP server address.
❷ Configure the port to your TCP server listening port.
❸ Configure the severity level to INFO so that most logs are sent, for easier verification.
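If the TCP server requires TLS, the tcp object also supports tls and tls_server_name fields. A sketch, where the server name logs.example.com is a placeholder; verify the fields against your APISIX version's plugin schema:

```json
{
  "tcp": {
    "host": "192.168.2.103",
    "port": 19000,
    "tls": true,
    "tls_server_name": "logs.example.com"
  },
  "level": "INFO"
}
```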
To verify, you can manually generate a log at warn level by reloading APISIX.
If you are using Docker, in the terminal session where netcat is listening, you should see a log entry. If you are using Kubernetes, check the tcp-server pod logs:
kubectl logs -n aic -l app=tcp-server
You should see a log entry similar to the following:
2025/01/26 20:15:29 [warn] 211#211: *35552 [lua] plugin.lua:205: load(): new plugins: {"cas-auth":true,"real-ip":true,"ai":true,"client-control":true,"proxy-control":true,"request-id":true,"zipkin":true,"ext-plugin-pre-req":true,"fault-injection":true,"mocking":true,"serverless-pre-function":true,"cors":true,"ip-restriction":true,"ua-restriction":true,"referer-restriction":true,"csrf":true,"uri-blocker":true,"request-validation":true,"chaitin-waf":true,"multi-auth":true,"openid-connect":true,"authz-casbin":true,"authz-casdoor":true,"wolf-rbac":true,"ldap-auth":true,"hmac-auth":true,"basic-auth":true,"jwt-auth":true,"redirect":true,"key-auth":true,"consumer-restriction":true,"attach-consumer-label":true,"authz-keycloak":true,"proxy-cache":true,"body-transformer":true,"ai-prompt-template":true,"ai-prompt-decorator":true,"proxy-mirror":true,"proxy-rewrite":true,"workflow":true,"api-breaker":true,"ai-proxy":true,"limit-conn":true,"limit-count":true,"limit-req":true,"gzip":true,"server-info":true,"traffic-split":true,"response-rewrite":true,"degraphql":true,"kafka-proxy":true,"grpc-transcode":true,"grpc-web":true,"http-dubbo":true,"public-api":true,"prometheus":true,"datadog":true,"loki-logger":true,"elasticsearch-logger":true,"echo":true,"loggly":true,"http-logger":true,"splunk-hec-logging":true,"skywalking-logger":true,"google-cloud-logging":true,"sls-logger":true,"tcp-logger":true,"kafka-logger":true,"rocketmq-logger":true,"syslog":true,"udp-logger":true,"file-logger":true,"clickhouse-logger":true,"tencent-cloud-cls":true,"inspect":true,"example-plugin":true,"aws-lambda":true,"azure-functions":true,"openwhisk":true,"openfunction":true,"error-log-logger":true,"ext-plugin-post-req":true,"ext-plugin-post-resp":true,"serverless-post-function":true,"opa":true,"forward-auth":true,"jwe-decrypt":true}, context: init_worker_by_lua*
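The sink receives raw error.log lines, so standard text tools work for filtering by severity. A minimal, self-contained sketch using grep on a shortened sample line:

```shell
# Extract the first bracketed severity token from a sample error.log line
sample='2025/01/26 20:15:29 [warn] 211#211: *35552 [lua] plugin.lua:205: load(): new plugins'
echo "$sample" | grep -o '\[[a-z]*\]' | head -n 1
# prints: [warn]
```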
Send Logs to SkyWalking
The following example demonstrates how you can configure the error-log-logger plugin to send error logs to SkyWalking.
Set up SkyWalking OAP server:
- Docker
- Kubernetes
Start SkyWalking storage, an OAP server, and the Booster UI with Docker Compose, following SkyWalking's documentation. Once set up, the OAP server should be listening on port 12800 and you should be able to access the UI at http://localhost:8080.
Create a Kubernetes manifest for the SkyWalking OAP server:
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: aic
name: skywalking-oap
spec:
replicas: 1
selector:
matchLabels:
app: skywalking-oap
template:
metadata:
labels:
app: skywalking-oap
spec:
containers:
- name: skywalking-oap
image: apache/skywalking-oap-server:10.1.0
env:
- name: SW_STORAGE
value: H2
ports:
- containerPort: 11800
- containerPort: 12800
---
apiVersion: v1
kind: Service
metadata:
namespace: aic
name: skywalking-oap
spec:
selector:
app: skywalking-oap
ports:
- name: grpc
port: 11800
targetPort: 11800
- name: http
port: 12800
targetPort: 12800
type: ClusterIP
Apply the manifests:
kubectl apply -f skywalking-oap.yaml
Wait for the OAP server to become ready:
kubectl wait --for=condition=available --timeout=120s -n aic deployment/skywalking-oap
Configure the plugin metadata for error-log-logger:
- Admin API
- ADC
- Ingress Controller
curl "http://127.0.0.1:9180/apisix/admin/plugin_metadata/error-log-logger" -X PUT \
-H "X-API-KEY: ${ADMIN_API_KEY}" \
-d '{
"skywalking": {
"endpoint_addr": "http://192.168.2.103:12800/v3/logs"
},
"level": "INFO"
}'
plugin_metadata:
- name: error-log-logger
skywalking:
endpoint_addr: "http://192.168.2.103:12800/v3/logs"
level: INFO
Synchronize the configuration to the gateway:
adc sync -f adc.yaml
apiVersion: apisix.apache.org/v1alpha1
kind: GatewayProxy
metadata:
namespace: aic
name: apisix-config
spec:
provider:
type: ControlPlane
controlPlane:
service:
name: apisix-admin
port: 9180
auth:
type: AdminKey
adminKey:
value: edd1c9f034335f136f87ad84b625c8f1
pluginMetadata:
error-log-logger:
skywalking:
endpoint_addr: "http://skywalking-oap.aic.svc:12800/v3/logs"
level: INFO
Apply the configuration:
kubectl apply -f error-log-logger-metadata.yaml
❶ Configure the endpoint address to the SkyWalking server.
❷ Configure the severity level to INFO so that most logs are sent, for easier verification.
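The skywalking object also supports service_name and service_instance_name fields, which control how the logs are grouped in the SkyWalking UI. A sketch, where apisix-instance-1 is a placeholder value; field availability and defaults may vary by APISIX version:

```json
{
  "skywalking": {
    "endpoint_addr": "http://192.168.2.103:12800/v3/logs",
    "service_name": "APISIX",
    "service_instance_name": "apisix-instance-1"
  },
  "level": "INFO"
}
```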
To verify, you can manually generate a log at warn level by reloading APISIX.
In the SkyWalking UI, navigate to General Service > Services. You should see a service called APISIX with the following log entry:
2025/01/27 07:40:06 [warn] 211#211: *35552 [lua] plugin.lua:205: load(): new plugins: {"cas-auth":true,"real-ip":true,"ai":true,"client-control":true,"proxy-control":true,"request-id":true,"zipkin":true,"ext-plugin-pre-req":true,"fault-injection":true,"mocking":true,"serverless-pre-function":true,"cors":true,"ip-restriction":true,"ua-restriction":true,"referer-restriction":true,"csrf":true,"uri-blocker":true,"request-validation":true,"chaitin-waf":true,"multi-auth":true,"openid-connect":true,"authz-casbin":true,"authz-casdoor":true,"wolf-rbac":true,"ldap-auth":true,"hmac-auth":true,"basic-auth":true,"jwt-auth":true,"redirect":true,"key-auth":true,"consumer-restriction":true,"attach-consumer-label":true,"authz-keycloak":true,"proxy-cache":true,"body-transformer":true,"ai-prompt-template":true,"ai-prompt-decorator":true,"proxy-mirror":true,"proxy-rewrite":true,"workflow":true,"api-breaker":true,"ai-proxy":true,"limit-conn":true,"limit-count":true,"limit-req":true,"gzip":true,"server-info":true,"traffic-split":true,"response-rewrite":true,"degraphql":true,"kafka-proxy":true,"grpc-transcode":true,"grpc-web":true,"http-dubbo":true,"public-api":true,"prometheus":true,"datadog":true,"loki-logger":true,"elasticsearch-logger":true,"echo":true,"loggly":true,"http-logger":true,"splunk-hec-logging":true,"skywalking-logger":true,"google-cloud-logging":true,"sls-logger":true,"tcp-logger":true,"kafka-logger":true,"rocketmq-logger":true,"syslog":true,"udp-logger":true,"file-logger":true,"clickhouse-logger":true,"tencent-cloud-cls":true,"inspect":true,"example-plugin":true,"aws-lambda":true,"azure-functions":true,"openwhisk":true,"openfunction":true,"error-log-logger":true,"ext-plugin-post-req":true,"ext-plugin-post-resp":true,"serverless-post-function":true,"opa":true,"forward-auth":true,"jwe-decrypt":true}, context: init_worker_by_lua*
You should also observe logs at other severity levels, such as error, emerg, and info, when they are generated.
Send Logs to ClickHouse
The following example demonstrates how you can configure the error-log-logger plugin to send error logs to ClickHouse.
Start a sample ClickHouse server with the user default and an empty password:
- Docker
- Kubernetes
docker run -d -p 8123:8123 -p 9000:9000 -p 9009:9009 --name clickhouse-server clickhouse/clickhouse-server
In the ClickHouse database default, create a table named default_logs with a data column. The plugin expects this data column and pushes logs to it.
curl "http://127.0.0.1:8123" -X POST -d '
CREATE TABLE default.default_logs (
data String,
PRIMARY KEY(`data`)
)
ENGINE = MergeTree()
' --user default:
Create a Kubernetes manifest file for the ClickHouse deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: aic
name: clickhouse-server
spec:
replicas: 1
selector:
matchLabels:
app: clickhouse-server
template:
metadata:
labels:
app: clickhouse-server
spec:
containers:
- name: clickhouse-server
image: clickhouse/clickhouse-server
ports:
- containerPort: 8123
- containerPort: 9000
- containerPort: 9009
Create a Kubernetes manifest file for the ClickHouse service:
apiVersion: v1
kind: Service
metadata:
namespace: aic
name: clickhouse-server
spec:
selector:
app: clickhouse-server
ports:
- name: http
port: 8123
targetPort: 8123
- name: native
port: 9000
targetPort: 9000
type: ClusterIP
Apply the manifests:
kubectl apply -f clickhouse-deployment.yaml -f clickhouse-service.yaml
In the ClickHouse database default, create a table named default_logs with a data column. The plugin expects this data column and pushes logs to it.
kubectl exec -n aic deploy/clickhouse-server -- clickhouse-client --query "
CREATE TABLE default.default_logs (
data String,
PRIMARY KEY(data)
)
ENGINE = MergeTree()
"
Configure the plugin metadata for error-log-logger:
- Admin API
- ADC
- Ingress Controller
curl "http://127.0.0.1:9180/apisix/admin/plugin_metadata/error-log-logger" -X PUT \
-H "X-API-KEY: ${ADMIN_API_KEY}" \
-d '{
"clickhouse": {
"endpoint_addr": "http://192.168.2.103:8123",
"user": "default",
"password": "",
"database": "default",
"logtable": "default_logs"
},
"level": "INFO"
}'
plugin_metadata:
- name: error-log-logger
clickhouse:
endpoint_addr: "http://192.168.2.103:8123"
user: default
password: ""
database: default
logtable: default_logs
level: INFO
Synchronize the configuration to the gateway:
adc sync -f adc.yaml
apiVersion: apisix.apache.org/v1alpha1
kind: GatewayProxy
metadata:
namespace: aic
name: apisix-config
spec:
provider:
type: ControlPlane
controlPlane:
service:
name: apisix-admin
port: 9180
auth:
type: AdminKey
adminKey:
value: edd1c9f034335f136f87ad84b625c8f1
pluginMetadata:
error-log-logger:
clickhouse:
endpoint_addr: "http://clickhouse-server.aic.svc:8123"
user: default
password: ""
database: default
logtable: default_logs
level: INFO
Apply the configuration:
kubectl apply -f error-log-logger-metadata.yaml
❶ Replace with your ClickHouse server address.
❷ Set the username to default.
❸ Set the password to an empty string.
❹ Set the database to default.
❺ Set the database table to default_logs.
❻ Configure the severity level to INFO so that most logs are sent, for easier verification.
To verify, you can manually generate a log at warn level by reloading APISIX.
Send a request to ClickHouse to see the log entries:
echo 'SELECT * FROM default.default_logs FORMAT Pretty' | curl "http://127.0.0.1:8123" -d @-
You should see a log entry similar to the following:
2025/01/27 08:21:13 [warn] 353#353: *106572 [lua] plugin.lua:205: load(): new plugins: {"client-control":true,"proxy-control":true,"request-id":true,"zipkin":true,"ext-plugin-pre-req":true,"fault-injection":true,"mocking":true,"serverless-pre-function":true,"cors":true,"ip-restriction":true,"ua-restriction":true,"referer-restriction":true,"csrf":true,"uri-blocker":true,"request-validation":true,"chaitin-waf":true,"multi-auth":true,"openid-connect":true,"authz-casbin":true,"authz-casdoor":true,"wolf-rbac":true,"ldap-auth":true,"hmac-auth":true,"basic-auth":true,"jwt-auth":true,"jwe-decrypt":true,"key-auth":true,"consumer-restriction":true,"attach-consumer-label":true,"forward-auth":true,"opa":true,"authz-keycloak":true,"proxy-cache":true,"body-transformer":true,"ai-prompt-template":true,"ai-prompt-decorator":true,"proxy-mirror":true,"proxy-rewrite":true,"workflow":true,"api-breaker":true,"ai-proxy":true,"limit-conn":true,"limit-count":true,"limit-req":true,"gzip":true,"server-info":true,"traffic-split":true,"response-rewrite":true,"degraphql":true,"kafka-proxy":true,"grpc-transcode":true,"grpc-web":true,"http-dubbo":true,"public-api":true,"error-log-logger":true,"google-cloud-logging":true,"sls-logger":true,"tcp-logger":true,"kafka-logger":true,"rocketmq-logger":true,"syslog":true,"udp-logger":true,"file-logger":true,"clickhouse-logger":true,"tencent-cloud-cls":true,"inspect":true,"example-plugin":true,"aws-lambda":true,"azure-functions":true,"openwhisk":true,"openfunction":true,"serverless-post-function":true,"ext-plugin-post-req":true,"ext-plugin-post-resp":true,"redirect":true,"skywalking-logger":true,"splunk-hec-logging":true,"http-logger":true,"loggly":true,"echo":true,"elasticsearch-logger":true,"cas-auth":true,"prometheus":true,"datadog":true,"loki-logger":true,"real-ip":true,"ai":true}, context: init_worker_by_lua* │