# mqtt-proxy
The `mqtt-proxy` plugin is an L4 plugin that supports proxying and load balancing MQTT requests to MQTT servers. It supports MQTT versions 3.1.x and 5.0. The plugin must be configured on a stream route, and L4 traffic proxying must be enabled in APISIX.
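The `protocol_name` and `protocol_level` values used in the configurations below come from the MQTT specification itself, not from this plugin. As a quick reference, the mapping can be sketched as follows (values per the MQTT spec):

```python
# Protocol name and level carried in the MQTT CONNECT packet,
# per MQTT version (as defined by the MQTT specification).
MQTT_PROTOCOL = {
    "3.1":   {"protocol_name": "MQIsdp", "protocol_level": 3},
    "3.1.1": {"protocol_name": "MQTT",   "protocol_level": 4},
    "5.0":   {"protocol_name": "MQTT",   "protocol_level": 5},
}

# The examples in this document target MQTT 3.1.1:
print(MQTT_PROTOCOL["3.1.1"])
```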
## Examples
By default, APISIX only proxies L7 traffic. Before proceeding with the examples, ensure that L4 traffic proxying is enabled in APISIX.

Update the configuration file as follows to enable L4 traffic proxying:
```yaml
apisix:
  proxy_mode: http&stream      # enable both L4 and L7 proxying
  stream_proxy:                # L4 proxy configuration
    tcp:
      - 9100                   # TCP proxy listening port
```
Reload APISIX for the changes to take effect. APISIX should now listen for L4 traffic on port 9100.
The examples below use an MQTT client from the Mosquitto project to publish and subscribe to messages. You can download it from the Mosquitto project website or use any other MQTT client of your choice.
### Proxy to an MQTT Broker
The following example demonstrates how you can configure a stream route to proxy traffic to a hosted MQTT server and verify that APISIX can proxy MQTT messages successfully.
Create a stream route to the MQTT server and configure the mqtt-proxy plugin:
- Admin API
- ADC
- Ingress Controller
```shell
curl "http://127.0.0.1:9180/apisix/admin/stream_routes" -X PUT \
  -H "X-API-KEY: ${ADMIN_API_KEY}" \
  -d '{
    "id": "mqtt-route",
    "plugins": {
      "mqtt-proxy": {
        "protocol_name": "MQTT",
        "protocol_level": 4
      }
    },
    "upstream": {
      "type": "roundrobin",
      "nodes": {
        "test.mosquitto.org:1883": 1
      }
    }
  }'
```
```yaml
services:
  - name: mqtt-service
    upstream:
      name: default
      scheme: tcp
      nodes:
        - host: test.mosquitto.org
          port: 1883
          weight: 1
    stream_routes:
      - name: mqtt-route
        server_port: 9100
        plugins:
          mqtt-proxy:
            protocol_name: MQTT
            protocol_level: 4
```
Synchronize the configuration to the gateway:

```shell
adc sync -f adc.yaml
```
- Gateway API
- APISIX CRD
Attaching L4 plugins is currently not supported with the Gateway API, so this example cannot be completed using it.
Use APISIX CRD to attach the mqtt-proxy plugin to the stream route:
```yaml
apiVersion: v1
kind: Service
metadata:
  namespace: aic
  name: mqtt-broker
spec:
  type: ExternalName
  externalName: test.mosquitto.org
  ports:
    - name: mqtt
      port: 1883
      targetPort: 1883
---
apiVersion: apisix.apache.org/v2
kind: ApisixRoute
metadata:
  namespace: aic
  name: mqtt-route
spec:
  ingressClassName: apisix
  stream:
    - name: mqtt-route
      protocol: TCP
      match:
        ingressPort: 9100
      backend:
        serviceName: mqtt-broker
        servicePort: 1883
      plugins:
        - name: mqtt-proxy
          enable: true
          config:
            protocol_name: MQTT
            protocol_level: 4
```
Apply the configuration:

```shell
kubectl apply -f mqtt-proxy-ic.yaml
```
Open two terminal sessions. In the first, subscribe to the test topic:

```shell
mosquitto_sub -h test.mosquitto.org -p 1883 -t "test/apisix"
```

In the second, publish a sample message to the created route:

```shell
mosquitto_pub -h 127.0.0.1 -p 9100 -t "test/apisix" -m "Hello APISIX"
```

You should see the message `Hello APISIX` in the first terminal.
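For context, the plugin inspects the initial MQTT CONNECT packet and validates it against the configured `protocol_name` and `protocol_level`. The Python sketch below builds a minimal MQTT 3.1.1 CONNECT packet to show which fields are involved; it is illustrative only and not part of the plugin:

```python
import struct

def build_connect_packet(client_id: str) -> bytes:
    """Build a minimal MQTT 3.1.1 CONNECT packet (clean session, no auth)."""
    protocol_name = b"MQTT"
    variable_header = (
        struct.pack(">H", len(protocol_name)) + protocol_name  # protocol name
        + bytes([4])             # protocol level: 4 = MQTT 3.1.1
        + bytes([0x02])          # connect flags: clean session
        + struct.pack(">H", 60)  # keep-alive interval in seconds
    )
    payload = struct.pack(">H", len(client_id)) + client_id.encode()
    remaining = variable_header + payload
    # Fixed header: packet type 1 (CONNECT) in the high nibble, then the
    # remaining length (one byte suffices for this small packet).
    return bytes([0x10, len(remaining)]) + remaining

pkt = build_connect_packet("publisher-1")
```

The protocol name bytes (`MQTT`) and the protocol level byte (`4`) in this packet are what the plugin's `protocol_name` and `protocol_level` options are matched against.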
### Load Balance MQTT Traffic
The following example demonstrates how you can configure a stream route to load balance MQTT traffic to different MQTT servers.
When the plugin is enabled, it registers a variable `mqtt_client_id` that can be used as the load balancing key. MQTT connections with different client IDs are forwarded to different upstream nodes based on the consistent hashing algorithm. If the client ID is missing, the client IP address is used instead.
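The selection behavior can be illustrated with a simplified Python sketch that hashes a client ID onto the example node list. This is a stand-in for demonstration, not APISIX's actual consistent hashing implementation:

```python
import hashlib

# Upstream nodes from the example below.
NODES = ["test.mosquitto.org:1883", "broker.mqtt.cool:1883"]

def pick_node(key: str) -> str:
    """Deterministically map a hash key (mqtt_client_id, or the client IP
    when the client ID is missing) to one of the upstream nodes."""
    digest = hashlib.md5(key.encode()).digest()
    index = int.from_bytes(digest[:4], "big") % len(NODES)
    return NODES[index]

# The same client ID always maps to the same upstream node, so a client's
# MQTT session consistently lands on one broker.
assert pick_node("publisher-1") == pick_node("publisher-1")
```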
Create a stream route to two MQTT servers and configure the mqtt-proxy plugin:
- Admin API
- ADC
- Ingress Controller
```shell
curl "http://127.0.0.1:9180/apisix/admin/stream_routes" -X PUT \
  -H "X-API-KEY: ${ADMIN_API_KEY}" \
  -d '{
    "id": "mqtt-route",
    "plugins": {
      "mqtt-proxy": {
        "protocol_name": "MQTT",
        "protocol_level": 4
      }
    },
    "upstream": {
      "type": "chash",
      "key": "mqtt_client_id",
      "nodes": [
        {
          "host": "test.mosquitto.org",
          "port": 1883,
          "weight": 1
        },
        {
          "host": "broker.mqtt.cool",
          "port": 1883,
          "weight": 1
        }
      ]
    }
  }'
```
```yaml
services:
  - name: mqtt-service
    upstream:
      name: default
      scheme: tcp
      type: chash
      key: mqtt_client_id
      nodes:
        - host: test.mosquitto.org
          port: 1883
          weight: 1
        - host: broker.mqtt.cool
          port: 1883
          weight: 1
    stream_routes:
      - name: mqtt-route
        server_port: 9100
        plugins:
          mqtt-proxy:
            protocol_name: MQTT
            protocol_level: 4
```
Synchronize the configuration to the gateway:

```shell
adc sync -f adc.yaml
```
- Gateway API
- APISIX CRD
Attaching L4 plugins is currently not supported with the Gateway API, so this example cannot be completed using it.
Use APISIX CRD to attach the mqtt-proxy plugin to the stream route:

```yaml
apiVersion: v1
kind: Service
metadata:
  namespace: aic
  name: mqtt-brokers
spec:
  ports:
    - name: mqtt
      port: 1883
      protocol: TCP
---
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  namespace: aic
  name: mqtt-brokers-1
  labels:
    kubernetes.io/service-name: mqtt-brokers
addressType: FQDN
ports:
  - name: mqtt
    protocol: TCP
    port: 1883
endpoints:
  - addresses:
      - test.mosquitto.org
  - addresses:
      - broker.mqtt.cool
---
apiVersion: apisix.apache.org/v2
kind: ApisixUpstream
metadata:
  namespace: aic
  name: mqtt-brokers
spec:
  ingressClassName: apisix
  loadbalancer:
    type: chash
    key: mqtt_client_id
    hashOn: vars
---
apiVersion: apisix.apache.org/v2
kind: ApisixRoute
metadata:
  namespace: aic
  name: mqtt-route
spec:
  ingressClassName: apisix
  stream:
    - name: mqtt-route
      protocol: TCP
      match:
        ingressPort: 9100
      backend:
        serviceName: mqtt-brokers
        servicePort: 1883
      plugins:
        - name: mqtt-proxy
          enable: true
          config:
            protocol_name: MQTT
            protocol_level: 4
```
Apply the configuration:

```shell
kubectl apply -f mqtt-proxy-ic.yaml
```
For the Admin API and ADC examples, open three terminal sessions. In the first, subscribe to the test topic on the first MQTT broker:

```shell
mosquitto_sub -h test.mosquitto.org -p 1883 -t "test/apisix"
```

In the second terminal, subscribe to the same topic on the second MQTT broker:

```shell
mosquitto_sub -h broker.mqtt.cool -p 1883 -t "test/apisix"
```

In the third terminal, publish messages with different MQTT client IDs to the created route:

```shell
mosquitto_pub -h 127.0.0.1 -p 9100 -i publisher-1 -t "test/apisix" -m "Hello from publisher-1"
mosquitto_pub -h 127.0.0.1 -p 9100 -i publisher-2 -t "test/apisix" -m "Hello from publisher-2"
```

You should see the published messages in the subscriber terminals, showing that connections with different `mqtt_client_id` values can be routed to different upstream brokers.