Proxy OpenAI Requests
OpenAI provides access to state-of-the-art AI models, such as GPT-3, for tasks including natural language processing and text generation. Integrating OpenAI's APIs into your applications can unlock powerful capabilities for text analysis, content generation, and other AI-driven tasks.
APISIX provides capabilities for secret management, response streaming, rate limiting, and more, making it an excellent choice for proxying requests to OpenAI's API endpoints.
This guide will show you how to configure APISIX to integrate with OpenAI APIs, using the proxy-rewrite plugin to attach the API key to every request.
Prerequisite(s)
- Install Docker.
- Install cURL to send requests to the services for validation.
- Follow the Getting Started Tutorial to start a new APISIX instance in Docker or on Kubernetes.
Obtain an OpenAI API Key
Create an OpenAI account and an API key before proceeding. You can optionally save the key to an environment variable:
export OPENAI_API_KEY=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 # replace with your API key
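Since APISIX also supports secret management (mentioned above), you can alternatively keep the key in a store such as HashiCorp Vault rather than in an environment variable. As a sketch, a Vault-backed secret can be registered with a `PUT` to the Admin API at `/apisix/admin/secrets/vault/openai`, where the address, prefix, and token below are placeholders for your own Vault setup:

```json
{
  "uri": "https://vault.example.com:8200",
  "prefix": "kv/apisix",
  "token": "vault-token-placeholder"
}
```

The stored key can then be referenced as `$secret://vault/openai/openai_api_key` in plugin fields that support secret references; check the documentation of the plugin you are configuring to confirm support.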
Create a Route to OpenAI API
Create a route to OpenAI's chat completions endpoint and use the proxy-rewrite plugin to attach the API key to the request headers:
- Admin API
- ADC
- Ingress Controller
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT -d '{
  "id": "openai-chat",
  "uri": "/v1/chat/completions",
  "plugins": {
    "proxy-rewrite": {
      "headers": {
        "set": {
          "Authorization": "Bearer '"$OPENAI_API_KEY"'"
        }
      }
    }
  },
  "upstream": {
    "scheme": "https",
    "nodes": {
      "api.openai.com:443": 1
    },
    "type": "roundrobin"
  }
}'
❶ Configure the route to the OpenAI API's chat completions endpoint. You can adjust the endpoint to your needs.
❷ Attach the OpenAI API key to the Authorization request header.
❸ Set the scheme to HTTPS.
❹ Set the upstream node to OpenAI's API domain.
services:
  - name: OpenAI Service
    routes:
      - uris:
          - /v1/chat/completions
        name: openai-chat
        plugins:
          proxy-rewrite:
            headers:
              set:
                Authorization: Bearer ${OPENAI_API_KEY}
        upstream:
          scheme: https
          type: roundrobin
          nodes:
            - host: api.openai.com
              port: 443
              weight: 1
❶ Configure the route to the OpenAI API's chat completions endpoint. You can adjust the endpoint to your needs.
❷ Attach the OpenAI API key to the Authorization request header.
❸ Set the scheme to HTTPS.
❹ Set the upstream node to OpenAI's API domain.
Synchronize the configuration to APISIX:
adc sync -f adc.yaml
Create a Kubernetes manifest file to configure a route using the ApisixRoute custom resource:
apiVersion: apisix.apache.org/v2
kind: ApisixRoute
metadata:
  name: openai-route
  namespace: ingress-apisix
spec:
  http:
    - name: openai-route
      match:
        paths:
          - /v1/chat/completions*
      upstreams:
        - name: openai-upstream
      plugins:
        - name: proxy-rewrite
          enable: true
          config:
            headers:
              Authorization: "Bearer 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" # replace with your API key
❶ Configure the route to the OpenAI API's chat completions endpoint. You can adjust the endpoint to your needs.
❷ Attach the OpenAI API key to the Authorization header. Alternatively, you can use Kubernetes Secrets, a Helm values file, or Kustomize to manage the key instead of hardcoding it in the manifest.
Create another Kubernetes manifest file to configure how the upstream should pass the host header using the ApisixUpstream custom resource:
apiVersion: apisix.apache.org/v2
kind: ApisixUpstream
metadata:
  name: openai-upstream
  namespace: ingress-apisix
spec:
  scheme: https
  passHost: node
  loadbalancer:
    type: roundrobin
  externalNodes:
    - type: Domain
      name: api.openai.com
      port: 443
❶ Set the upstream scheme to HTTPS.
❷ Set the upstream node to OpenAI's API domain and the port to 443.
Apply the configurations to your cluster:
kubectl apply -f openai-route.yaml -f openai-upstream.yaml
Verify
- Admin API
- ADC
- Ingress Controller
First, expose the service port to your local machine by port forwarding:
kubectl port-forward svc/apisix-gateway 9080:80 &
The command above runs in the background and maps port 80 of the apisix-gateway service to port 9080 on the local machine.
Send a POST request to the route with a question in the request body:
curl "http://127.0.0.1:9080/v1/chat/completions" -X POST \
  -H "Host: api.openai.com:443" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [
      {
        "role": "system",
        "content": "You are a computer scientist."
      },
      {
        "role": "user",
        "content": "Explain in one sentence what a Turing machine is."
      }
    ]
  }'
You should see a response similar to the following:
{
  ...,
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "A Turing machine is an abstract mathematical model that manipulates symbols on an infinite tape according to a set of rules, representing the concept of a general-purpose computer."
      },
      "logprobs": null,
      "finish_reason": "stop"
    }
  ],
  ...
}
See OpenAI's API specifications for more information about how to compose the request.
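If you compose the request programmatically rather than with cURL, the body is plain JSON and the answer text sits at `choices[0].message.content` in the response. A minimal Python sketch, using a hardcoded sample response for illustration rather than a live API call:

```python
import json

# Build the same chat request body used in the cURL example above.
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [
        {"role": "system", "content": "You are a computer scientist."},
        {
            "role": "user",
            "content": "Explain in one sentence what a Turing machine is.",
        },
    ],
}
body = json.dumps(payload)  # send this as the POST body through APISIX

# Given a response shaped like the (truncated) example above, the answer
# text is at choices[0].message.content.
sample_response = {
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "A Turing machine is an abstract model of computation.",
            },
            "finish_reason": "stop",
        }
    ]
}
answer = sample_response["choices"][0]["message"]["content"]
print(answer)
```

The same extraction applies to real responses; only fields shown in OpenAI's API reference should be relied upon.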
Next Steps
You have now learned how to integrate APISIX with OpenAI. See OpenAI's API reference to learn more about OpenAI's capabilities.
If you would like to integrate with OpenAI's streaming API, you can use the proxy-buffering plugin to disable NGINX's proxy_buffering directive, which prevents server-sent events (SSE) from being buffered before they reach the client.
In addition, you can integrate more capabilities that APISIX offers, such as rate limiting and caching, to improve system availability and user experience.
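For instance, rate limiting can be layered onto the same route with the limit-count plugin. A sketch that allows 60 requests per client IP per minute and returns HTTP 429 beyond that; the numbers are illustrative and should be tuned to your OpenAI quota:

```json
"plugins": {
  "limit-count": {
    "count": 60,
    "time_window": 60,
    "rejected_code": 429,
    "key_type": "var",
    "key": "remote_addr"
  }
}
```

This fragment merges into the route object from the earlier Admin API example, alongside the existing proxy-rewrite configuration.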