Version: 3.9.x

Proxy Your First LLM Request in 5 Minutes

This guide walks you through proxying your first LLM request through API7 Gateway. You will create a route that forwards chat completion requests to OpenAI, with the gateway handling authentication and traffic management.

Prerequisites

  • Install Docker.

  • Install cURL to send requests to the services for validation.

  • Have a running API7 Gateway instance.

  • Create a token from the Dashboard and save it to an environment variable:

    export API_KEY=your-dashboard-token   # replace with your Dashboard token
  • Replace {gateway_group_id} with your gateway group ID. Use default if you are following the quickstart.
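Before moving on, it can help to confirm the environment variables used throughout this guide are set. The sketch below assumes a `GATEWAY_GROUP_ID` variable as a stand-in for the literal `{gateway_group_id}` placeholder used in the examples; both variable defaults here are illustrative:

```shell
# Sanity check (sketch): confirm the variables used throughout this guide
# are set before proceeding. GATEWAY_GROUP_ID is an illustrative variable
# name; the guide's examples use a literal {gateway_group_id} placeholder.
API_KEY="${API_KEY:-your-dashboard-token}"
GATEWAY_GROUP_ID="${GATEWAY_GROUP_ID:-default}"

if [ -n "$API_KEY" ] && [ -n "$GATEWAY_GROUP_ID" ]; then
  echo "environment ready"
else
  echo "missing API_KEY or GATEWAY_GROUP_ID" >&2
fi
```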

Obtain an OpenAI API Key

Create an OpenAI account and generate an API key. Save the key to an environment variable:

export OPENAI_API_KEY=sk-proj-xxxxxxxxxxxxxxxxxxxxxxxx   # replace with your OpenAI API key
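A quick local check can catch a mis-pasted key before any request is sent. This is only a format sketch (OpenAI secret keys start with `sk-`); the default assignment below is a placeholder for illustration, not a real key:

```shell
# Quick format check (sketch): OpenAI secret keys start with "sk-".
# The placeholder below stands in for a real key; with your key already
# exported, drop the default assignment and run the case statement as-is.
OPENAI_API_KEY="${OPENAI_API_KEY:-sk-proj-placeholder}"

case "$OPENAI_API_KEY" in
  sk-*) echo "key format looks plausible" ;;
  *)    echo "unexpected key format; double-check the value" >&2 ;;
esac
```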

Create or Reuse a Service

If you are following the Admin API examples in this guide and the other AI Gateway guides, you need a service_id.

curl -k "https://localhost:7443/apisix/admin/services/ai-gateway-service?gateway_group_id={gateway_group_id}" -X PUT \
  -H "X-API-KEY: ${API_KEY}" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "ai-gateway-service"
  }'

This request creates a placeholder service with ID ai-gateway-service. Reuse it in later Admin API examples:

export SERVICE_ID=ai-gateway-service

If you already have a service you want to attach AI Gateway routes to, use that service ID for the Admin API examples instead.
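The later Admin API examples interpolate `$SERVICE_ID` into the request URL. As a sketch of how that composes (assuming the `default` gateway group from the quickstart):

```shell
# Sketch: how $SERVICE_ID is interpolated into Admin API URLs in the
# examples that follow. "default" is the quickstart gateway group ID.
SERVICE_ID=ai-gateway-service
ADMIN_BASE="https://localhost:7443/apisix/admin"

URL="${ADMIN_BASE}/services/${SERVICE_ID}?gateway_group_id=default"
echo "$URL"
```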

Create a Route with AI Proxy

Create a route with the ai-proxy plugin to forward requests to OpenAI:

curl -k "https://localhost:7443/apisix/admin/routes?gateway_group_id={gateway_group_id}" -X PUT \
  -H "X-API-KEY: ${API_KEY}" \
  -d '{
    "id": "ai-gateway-quickstart",
    "service_id": "'"$SERVICE_ID"'",
    "paths": ["/ai"],
    "plugins": {
      "ai-proxy": {
        "provider": "openai",
        "auth": {
          "header": {
            "Authorization": "Bearer '"$OPENAI_API_KEY"'"
          }
        },
        "options": {
          "model": "gpt-4o"
        }
      }
    }
  }'

❶ Set the provider to openai.

❷ Attach your OpenAI API key in the Authorization header. The gateway injects this header on every request so that clients do not need to manage API keys.

❸ Set the default model to gpt-4o. Clients can override this by including a model field in the request body.
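Since the body's `model` field overrides the route default, a per-request override can be sketched like this; `gpt-4o-mini` is only an illustrative model choice, not part of the original guide:

```shell
# Per-request model override (sketch): a "model" field in the request body
# takes precedence over the route's default. "gpt-4o-mini" is illustrative.
PAYLOAD='{
  "model": "gpt-4o-mini",
  "messages": [
    { "role": "user", "content": "Say hello." }
  ]
}'

# The payload would be sent through the same route, e.g.:
#   curl "http://127.0.0.1:9080/ai" -X POST \
#     -H "Content-Type: application/json" -d "$PAYLOAD"
printf '%s\n' "$PAYLOAD" | grep -o '"model": "[^"]*"'
```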

Send a Chat Completion Request

Send a request to the route:

curl "http://127.0.0.1:9080/ai" -X POST \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      { "role": "system", "content": "You are a helpful assistant." },
      { "role": "user", "content": "What is an API gateway in one sentence?" }
    ]
  }'

You should receive a response similar to the following:

{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "model": "gpt-4o-2024-08-06",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "An API gateway is a server that acts as a single entry point for API requests, handling routing, authentication, rate limiting, and other cross-cutting concerns."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 25,
    "completion_tokens": 32,
    "total_tokens": 57
  }
}

The gateway intercepted the request, injected the OpenAI API key, forwarded it to OpenAI, and returned the response to the client. Your application never needs to handle LLM provider credentials directly.
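The `usage` block in the response is what you would feed into cost tracking. As a sketch using only POSIX tools (no `jq` dependency), with `RESPONSE` trimmed down from the sample response above:

```shell
# Sketch: pull the token usage out of a chat completion response with
# POSIX tools. RESPONSE is trimmed from the sample response in this guide.
RESPONSE='{"usage":{"prompt_tokens":25,"completion_tokens":32,"total_tokens":57}}'

TOTAL=$(printf '%s' "$RESPONSE" | sed -n 's/.*"total_tokens":[[:space:]]*\([0-9]*\).*/\1/p')
echo "total tokens billed: $TOTAL"
```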

Next Steps

Now that you have proxied your first LLM request, explore the full capabilities of API7 AI Gateway.
