Proxy Your First LLM Request in 5 Minutes

This guide walks you through proxying your first LLM request through API7 Gateway. You will create a route that forwards chat completion requests to OpenAI, with the gateway handling authentication and traffic management.

Prerequisites

  • Install Docker.

  • Install cURL to send requests to the services for validation.

  • Have a running API7 Enterprise Gateway instance.

  • Obtain the Admin API key. Save it to an environment variable:

    export ADMIN_API_KEY=your-admin-api-key   # replace with your API key
  • Obtain the ID of the service you want to configure. Save it to an environment variable:

    export SERVICE_ID=your-service-id         # replace with your service ID
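Before continuing, you can fail fast if either variable is missing. A minimal sketch using the shell's `${VAR:?}` parameter expansion (the values below are placeholders, not real credentials):

```shell
# Placeholder values -- replace with your real Admin API key and service ID
export ADMIN_API_KEY=your-admin-api-key
export SERVICE_ID=your-service-id

# ${VAR:?message} aborts with an error if VAR is unset or empty
: "${ADMIN_API_KEY:?ADMIN_API_KEY must be set}"
: "${SERVICE_ID:?SERVICE_ID must be set}"
echo "prerequisites configured"
```

If a variable is empty, the shell prints the message after `:?` and exits non-zero, so a typo surfaces immediately instead of producing a confusing Admin API error later.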

Obtain an OpenAI API Key

Create an OpenAI account and generate an API key. Save the key to an environment variable:

export OPENAI_API_KEY=sk-proj-xxxxxxxxxxxxxxxxxxxxxxxx   # replace with your API key

Create a Route with AI Proxy

Create a route with the ai-proxy plugin to forward requests to OpenAI:

curl "http://127.0.0.1:7080/apisix/admin/routes?gateway_group_id=default" -X PUT \
  -H "X-API-KEY: $ADMIN_API_KEY" \
  -d '{
    "id": "ai-gateway-quickstart",
    "service_id": "'"$SERVICE_ID"'",
    "paths": ["/ai"],
    "plugins": {
      "ai-proxy": {
        "provider": "openai",
        "auth": {
          "header": {
            "Authorization": "Bearer '"$OPENAI_API_KEY"'"
          }
        },
        "options": {
          "model": "gpt-4o"
        }
      }
    }
  }'

This configuration does three things:

  • provider sets the upstream LLM provider to openai.

  • auth.header attaches your OpenAI API key in the Authorization header. The gateway injects this header on every request so that clients do not need to manage API keys.

  • options.model sets the default model to gpt-4o. Clients can override this by including a model field in the request body.
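The quoting in the command above is easy to get wrong: the `'"$SERVICE_ID"'` and `'"$OPENAI_API_KEY"'` sequences temporarily close the single-quoted JSON body so the shell can expand the variable inside double quotes, then reopen it. A minimal illustration with a dummy key:

```shell
OPENAI_API_KEY=sk-demo   # dummy value for illustration only

# close the single quote, splice a double-quoted expansion, reopen the single quote
echo '{"Authorization": "Bearer '"$OPENAI_API_KEY"'"}'
# prints {"Authorization": "Bearer sk-demo"}
```

If the variable were left inside the single quotes, the gateway would receive the literal string `$OPENAI_API_KEY` and every upstream request would fail authentication.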

Send a Chat Completion Request

Send a request to the route:

curl "http://127.0.0.1:9080/ai" -X POST \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      { "role": "system", "content": "You are a helpful assistant." },
      { "role": "user", "content": "What is an API gateway in one sentence?" }
    ]
  }'

You should receive a response similar to the following:

{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "model": "gpt-4o-2024-08-06",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "An API gateway is a server that acts as a single entry point for API requests, handling routing, authentication, rate limiting, and other cross-cutting concerns."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 25,
    "completion_tokens": 32,
    "total_tokens": 57
  }
}

The gateway intercepted the request, injected the OpenAI API key, forwarded it to OpenAI, and returned the response to the client. Your application never needs to handle LLM provider credentials directly.
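Because gpt-4o is only the route's default, a client can select a different model per request by adding a model field to the body. A sketch of such a request, validated locally before sending (gpt-4o-mini is just an example model name; substitute any model enabled for your OpenAI account):

```shell
# Per-request model override: the "model" field takes precedence over the
# route's default ("gpt-4o-mini" is an example, not a recommendation)
PAYLOAD='{
  "model": "gpt-4o-mini",
  "messages": [
    { "role": "user", "content": "What is an API gateway in one sentence?" }
  ]
}'

# Sanity-check that the payload is valid JSON before sending it
echo "$PAYLOAD" | python3 -m json.tool > /dev/null && echo "payload OK"

# Send it through the gateway (requires the route created above):
# curl "http://127.0.0.1:9080/ai" -X POST \
#   -H "Content-Type: application/json" -d "$PAYLOAD"
```

Validating the body first is useful because a malformed JSON payload is rejected before it ever reaches the LLM provider, and the resulting gateway error can be harder to diagnose than a local parse failure.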

Next Steps

Now that you have proxied your first LLM request, explore the full capabilities of API7 AI Gateway in the rest of the documentation.
