
Version: 3.11.0

Proxy OpenAI Requests

OpenAI provides access to state-of-the-art AI models, such as GPT-3, for applications including natural language processing and text generation. Integrating OpenAI's APIs into your applications can unlock powerful capabilities for text analysis, content generation, and other AI-driven tasks.

APISIX provides capabilities for secret management, response streaming, rate limiting, and more, making it an excellent choice for proxying requests from OpenAI's API endpoints.

This guide will show you how to configure APISIX to integrate with OpenAI APIs to proxy user requests and model responses.

Prerequisite(s)

  • Install Docker.
  • Install cURL to send requests to the services for validation.
  • Follow the Getting Started Tutorial to start a new APISIX instance in Docker or on Kubernetes.

Obtain an OpenAI API Key

Create an OpenAI account and an API key before proceeding. You can optionally save the key to an environment variable as follows:

export OPENAI_API_KEY=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   # replace with your API key

Create a Route to OpenAI API

There are two main approaches to configure this route:

  • Create a route to the OpenAI endpoint and attach the API key using the proxy-rewrite plugin.
  • Create a route with the ai-proxy plugin, in which you can configure model parameters.

Using proxy-rewrite

Create a route to OpenAI's chat endpoint and use the proxy-rewrite plugin to attach the API key to the request headers:

curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT -d '{
  "id": "openai-chat",
  "uri": "/v1/chat/completions",
  "plugins": {
    "proxy-rewrite": {
      "headers": {
        "set": {
          "Authorization": "Bearer '"$OPENAI_API_KEY"'"
        }
      }
    }
  },
  "upstream": {
    "scheme": "https",
    "nodes": {
      "api.openai.com:443": 1
    },
    "type": "roundrobin"
  }
}'

❶ Configure the route to OpenAI API's chat endpoint. You can adjust the endpoint to your needs.

❷ Attach the OpenAI API key to the Authorization request header.

❸ Set the scheme to HTTPS.

❹ Set the upstream node to OpenAI's API domain.
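Optionally, you can fetch the route back from the Admin API to confirm it was created (the route id `openai-chat` and the default Admin API address are taken from the example above):

```shell
# Retrieve the route by its id to confirm the configuration was stored
curl "http://127.0.0.1:9180/apisix/admin/routes/openai-chat"
```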

Using ai-proxy

note

This feature is currently only available in Enterprise and will be made available in open-source APISIX in the 3.12.0 release.

Create a route with the ai-proxy plugin as follows:

curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT -d '{
  "id": "openai-chat",
  "uri": "/anything",
  "plugins": {
    "ai-proxy": {
      "auth": {
        "header": {
          "Authorization": "Bearer '"$OPENAI_API_KEY"'"
        }
      },
      "model": {
        "provider": "openai",
        "name": "gpt-3.5-turbo"
      }
    }
  },
  "upstream": {
    "nodes": {
      "httpbin.org": 1
    },
    "type": "roundrobin"
  }
}'

❶ Attach the OpenAI API key to the Authorization request header.

❷ Set the provider to openai, which will proxy requests to the OpenAI endpoint.

❸ Set the model to gpt-3.5-turbo.

passthrough is set to false by default in ai-proxy, which means the model response is returned directly to the client rather than forwarded to the upstream node; the upstream node can therefore be set to any value.

Verify

If you have configured the route with proxy-rewrite, send the following request to the route:

curl "http://127.0.0.1:9080/v1/chat/completions" -X POST \
  -H "Content-Type: application/json" \
  -H "Host: api.openai.com:443" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [
      {
        "role": "system",
        "content": "You are a computer scientist."
      },
      {
        "role": "user",
        "content": "Explain in one sentence what a Turing machine is."
      }
    ]
  }'

If you have configured the route with ai-proxy, send the following request to the route:

curl "http://127.0.0.1:9080/anything" -X POST \
  -H "Content-Type: application/json" \
  -H "Host: api.openai.com:443" \
  -d '{
    "messages": [
      {
        "role": "system",
        "content": "You are a computer scientist."
      },
      {
        "role": "user",
        "content": "Explain in one sentence what a Turing machine is."
      }
    ]
  }'

In both cases, you should receive a response similar to the following:

{
  ...,
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "A Turing machine is an abstract mathematical model that manipulates symbols on an infinite tape according to a set of rules, representing the concept of a general-purpose computer."
      },
      "logprobs": null,
      "finish_reason": "stop"
    }
  ],
  ...
}
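If you only need the assistant's reply, you can pipe the response through jq (a sketch, assuming jq is installed; the filter follows the response shape shown above):

```shell
# Send the chat request and print only the assistant's reply text
curl -s "http://127.0.0.1:9080/v1/chat/completions" -X POST \
  -H "Content-Type: application/json" \
  -H "Host: api.openai.com:443" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [
      {"role": "user", "content": "Explain in one sentence what a Turing machine is."}
    ]
  }' | jq -r '.choices[0].message.content'
```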

See OpenAI's API specifications for more information about how to compose the request.

Next Steps

You have now learned how to integrate APISIX with OpenAI. See OpenAI's API reference to learn more about OpenAI's capabilities.

If you would like to integrate with OpenAI's streaming API, you can use the proxy-buffering plugin to disable NGINX's proxy_buffering directive and prevent server-sent events (SSE) from being buffered.
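For example, with the proxy-rewrite route configured earlier, a streamed response can be requested by adding "stream": true to the payload (the stream parameter is part of OpenAI's chat completions API; curl's -N flag disables its own output buffering so chunks print as they arrive):

```shell
# Request an SSE-streamed response by setting "stream": true
curl -N "http://127.0.0.1:9080/v1/chat/completions" -X POST \
  -H "Content-Type: application/json" \
  -H "Host: api.openai.com:443" \
  -d '{
    "model": "gpt-3.5-turbo",
    "stream": true,
    "messages": [
      {"role": "user", "content": "Explain in one sentence what a Turing machine is."}
    ]
  }'
```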

In addition, you can integrate more capabilities that APISIX offers, such as rate limiting and caching, to improve system availability and user experience.
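As one example, the route could be rate limited with the limit-count plugin. The sketch below assumes a limit of 30 requests per minute per client IP and patches the proxy-rewrite route created earlier; adjust the values to your needs:

```shell
# Update the route to allow at most 30 requests per minute per client IP,
# rejecting excess requests with HTTP 429
curl "http://127.0.0.1:9180/apisix/admin/routes/openai-chat" -X PATCH -d '{
  "plugins": {
    "limit-count": {
      "count": 30,
      "time_window": 60,
      "key_type": "var",
      "key": "remote_addr",
      "rejected_code": 429
    }
  }
}'
```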

