
Version: 3.10.0

Proxy Azure OpenAI Requests

Azure OpenAI Service is a fully managed service that provides unified REST API access to OpenAI's language models, such as GPT-4 and GPT-3.5-Turbo. These models can be easily integrated into applications to add capabilities such as content generation, text completion, semantic search, and more.

This guide will walk you through the process of integrating APISIX with Azure OpenAI Service.

Prerequisite(s)

Request Access to Azure OpenAI Service

Before proceeding, you should first request access to Azure OpenAI Service by filling out a registration form, as required by Microsoft's commitment to responsible AI.

Please request access to GPT-3.5, GPT-3.5 Turbo, GPT-4, GPT-4 Turbo, and/or Embeddings Models to follow along.

request model access

Deploy an Azure OpenAI Service

Once access is granted, search for Azure AI services, navigate to Azure OpenAI in the left panel, and click Create Azure OpenAI:

Create Azure OpenAI Service

Fill out the project and instance details:

fill out information for the new instance

In the Network tab, select the All networks option, or adjust accordingly per your infrastructure:

select all networks as the network option

Continue with the setup until the deployment is complete:

deployment is complete

Obtain API Information

Go to the Azure OpenAI Studio and click into Chat playground:

Azure OpenAI Studio chat playground

You should see the playground where you can adjust model parameters and interact with the model:

interact with model

To obtain the API endpoint, API key, and sample query, select View code:

view code

You should see your previous interaction with the model converted to code, as well as API information:

sample code and API information

Note down the API key and sample curl query.

Create a Route in APISIX

You can optionally save your API key to an environment variable:

export AZ_OPENAI_API_KEY=57cha9ee8e8a89a12c0aha174f180f4   # replace with your API key

Create a route to the Azure API endpoint and use proxy-rewrite plugin to attach the API key to request headers:

curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT -d '{
  "id": "azure-openai-route",
  "uri": "/openai/deployments/api7-docs/chat/completions*",
  "plugins": {
    "proxy-rewrite": {
      "headers": {
        "set": {
          "api-key": "'"$AZ_OPENAI_API_KEY"'"
        }
      }
    }
  },
  "upstream": {
    "scheme": "https",
    "pass_host": "node",
    "nodes": {
      "api7-azure-openai.openai.azure.com:443": 1
    },
    "type": "roundrobin"
  }
}'

❶ Configure the route to Azure API's chat endpoint.

❷ Attach the API key to the request in the api-key header.

❸ Set the scheme to HTTPS.

❹ Set the upstream node to Azure's API domain and the listening port to 443.
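To confirm that APISIX accepted the configuration, you can fetch the route back from the Admin API. This is a quick sanity check; adjust the address (and add any Admin API key header your deployment requires):

```shell
# Fetch the route just created; the response echoes its JSON definition
curl "http://127.0.0.1:9180/apisix/admin/routes/azure-openai-route"
```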

Verify

Send a POST request to the route with a sample question in the request body:

curl "http://127.0.0.1:9080/openai/deployments/api7-docs/chat/completions?api-version=2024-02-15-preview" -X POST \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {
        "role": "system",
        "content": "You are an AI assistant that helps people find information."
      },
      {
        "role": "user",
        "content": "Write me a 50-word introduction for Apache APISIX."
      }
    ],
    "max_tokens": 800,
    "temperature": 0.7,
    "frequency_penalty": 0,
    "presence_penalty": 0,
    "top_p": 0.95,
    "stop": null
  }'

You should receive a response similar to the following:

{
  "choices": [
    {
      ...,
      "message": {
        "content": "Apache APISIX is a modern, cloud-native API gateway built to handle high-performance and low-latency use cases. It offers a wide range of features, including load balancing, rate limiting, authentication, and dynamic routing, making it an ideal choice for microservices and cloud-native architectures.",
        "role": "assistant"
      }
    }
  ],
  ...
}
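If you only want the generated text from a response like the one above, you can extract it with jq (assuming jq is installed; the sample JSON below is a trimmed-down illustration):

```shell
# Pull the assistant's message content out of a chat completion response
echo '{"choices":[{"message":{"role":"assistant","content":"Hello from Azure OpenAI."}}]}' \
  | jq -r '.choices[0].message.content'
# prints: Hello from Azure OpenAI.
```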

Next Steps

You have now learned how to integrate APISIX with Azure OpenAI Service. See Azure OpenAI Service REST API reference to learn more.

If you would like to stream the Azure API response, set the stream parameter to true in requests to Azure OpenAI Service and use the proxy-buffering plugin in APISIX to disable NGINX's proxy_buffering directive, which prevents the server-sent events (SSE) from being buffered.
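On the request side, a streaming call only differs in the body: per the Azure OpenAI REST API, setting stream to true makes the endpoint return SSE chunks instead of a single JSON object. A sketch, reusing the deployment name and API version from the earlier examples:

```shell
# Same route as before, but the response arrives as a stream of SSE chunks
curl "http://127.0.0.1:9080/openai/deployments/api7-docs/chat/completions?api-version=2024-02-15-preview" -X POST \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      { "role": "user", "content": "Write me a 50-word introduction for Apache APISIX." }
    ],
    "stream": true
  }'
```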

In addition, you can integrate more capabilities that APISIX offers, such as rate limiting and caching, to improve system availability and user experience.
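For instance, a minimal sketch of rate limiting with the limit-count plugin, patched onto the route created earlier (the quota values are illustrative; see the plugin's documentation for all attributes):

```shell
# Allow at most 60 requests per 60-second window on this route;
# requests over the quota are rejected with HTTP 429
curl "http://127.0.0.1:9180/apisix/admin/routes/azure-openai-route" -X PATCH -d '{
  "plugins": {
    "limit-count": {
      "count": 60,
      "time_window": 60,
      "rejected_code": 429
    }
  }
}'
```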

