Transform API Requests with AI-Powered Rewriting

This guide explains how to use ai-request-rewrite to transform API payloads with an LLM at the gateway layer. Unlike static regex or template rewrites, AI-driven transformation can interpret intent in unstructured input and normalize it into upstream-ready formats.

Overview

AI request transformation uses a dedicated LLM call to rewrite, enrich, or restructure payloads while traffic is in-flight through API7 Gateway.

Compared with traditional request transformation, AI-based transformation is useful when input is ambiguous, free-form, multilingual, or semantically complex.

Common use cases include:

  • Format normalization: Convert free-text user input into structured JSON.
  • Language translation: Convert multilingual input into a single target language expected by upstream systems.
  • Data enrichment: Extract and standardize fields such as intent, entities, or categories.
  • Legacy API adaptation: Transform modern client payloads into schemas required by legacy upstream APIs.
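For contrast with the last case: when the target schema is fixed and the input is already structured, a deterministic transform is enough, and no LLM call is needed. A sketch of a legacy adaptation expressed as a plain jq mapping (all field names here are illustrative, not part of the plugin):

```shell
# Deterministic legacy adaptation: map a modern client payload onto a
# hypothetical legacy schema. In an AI rewrite, the prompt would instruct
# the LLM to perform this same mapping on unstructured variants.
echo '{"customer":{"firstName":"Jane","lastName":"Doe"},"channel":"mobile"}' |
  jq -c '{CUST_FNAME: .customer.firstName,
          CUST_LNAME: .customer.lastName,
          SRC_CHNL: .channel}'
# → {"CUST_FNAME":"Jane","CUST_LNAME":"Doe","SRC_CHNL":"mobile"}
```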

Prerequisites

  • Install Docker.
  • Install cURL to send validation requests to the services.
  • Have a running API7 Enterprise Gateway instance. See the Getting Started Guide for setup instructions.

How It Works

The transformation flow:

  1. Client request enters the gateway.
  2. Gateway calls a transformation LLM using ai-request-rewrite and your prompt instructions.
  3. Gateway receives transformed content and injects it into the request payload.
  4. Modified request is forwarded to the upstream API.

Transformation can run in both directions:

  • Request-side transformation happens before the upstream is called.
  • Response-side transformation can be applied before data is returned to the client.

The transformation LLM call is middleware behavior: it runs in addition to, not instead of, your primary upstream service.
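Conceptually, the gateway's call to the transformation provider is an ordinary chat-completions request. A rough sketch of its shape (illustrative only; the exact fields the plugin sends depend on the provider):

```json
{
  "model": "gpt-4o",
  "messages": [
    { "role": "system", "content": "<your transformation prompt>" },
    { "role": "user", "content": "<original request body>" }
  ]
}
```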

Configure AI Request Rewriting

The following route uses ai-request-rewrite with OpenAI as the transformation provider.

curl "http://127.0.0.1:7080/apisix/admin/routes?gateway_group_id=default" -X PUT \
  -H "X-API-KEY: $ADMIN_API_KEY" \
  -d '{
    "id": "ai-request-transformation",
    "service_id": "'"$SERVICE_ID"'",
    "paths": ["/api/intake"],
    "plugins": {
      "ai-request-rewrite": {
        "provider": "openai",
        "auth": {
          "header": {
            "Authorization": "Bearer '"$OPENAI_API_KEY"'"
          }
        },
        "options": {
          "model": "gpt-4o"
        },
        "prompt": "transformation instructions here"
      }
    },
    "upstream": {
      "type": "roundrobin",
      "nodes": {
        "intake-api.internal:8080": 1
      }
    }
  }'

provider selects the transformation LLM backend. Here it is OpenAI.

auth.header.Authorization sets credentials for the transformation LLM call.

options.model sets the transformation model used for rewriting.

prompt contains explicit transformation instructions. Keep this concise, deterministic, and schema-oriented.

When using provider: "openai-compatible", override.endpoint is required by schema validation.

{
  "provider": "openai-compatible",
  "override": {
    "endpoint": "https://your-llm-endpoint.example/v1/chat/completions"
  },
  "auth": {
    "header": {
      "Authorization": "Bearer sk-xxxxxxxx"
    }
  },
  "options": {
    "model": "custom-model"
  },
  "prompt": "transformation instructions here"
}

Example: Normalize Free-Text Input

Scenario: clients send unstructured text, but the upstream expects strict JSON fields.

curl "http://127.0.0.1:9080/api/intake" -X POST \
  -H "Content-Type: text/plain" \
  --data-raw "Hi, I'm Jane Doe, my email is jane@example.com and I can't log in. Please help me reset my password."

Use a transformation prompt such as:

Extract name, email, and intent from the following text and return strict JSON with keys: name, email, intent.
If a field is missing, return null.

Expected transformed payload forwarded upstream:

{
  "name": "Jane Doe",
  "email": "jane@example.com",
  "intent": "password_reset"
}
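Because LLM output is probabilistic, it is worth validating the transformed payload against the expected schema before the upstream consumes it. A minimal sketch using jq; the strict three-key check is an assumption about this particular upstream, not a plugin feature:

```shell
# Reject any transformed payload that does not contain exactly the expected
# keys. jq's `keys` returns keys sorted alphabetically, so compare against
# the sorted list.
payload='{"name":"Jane Doe","email":"jane@example.com","intent":"password_reset"}'

if echo "$payload" | jq -e 'keys == ["email","intent","name"]' > /dev/null; then
  echo "payload OK"
else
  echo "payload rejected" >&2
fi
```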

Example: Translate Request Language

Scenario: clients send multilingual content, but the upstream only accepts English payloads.

curl "http://127.0.0.1:9080/api/intake" -X POST \
  -H "Content-Type: application/json" \
  -d '{
    "message": "Veuillez annuler mon rendez-vous de mercredi prochain et le reporter à vendredi matin."
  }'

Use a transformation prompt such as:

Translate the user message to English. Preserve meaning and time references. Return only translated text.

Example transformed request payload:

{
  "message": "Please cancel my appointment for next Wednesday and reschedule it to Friday morning."
}

Performance Considerations

AI transformation introduces an additional LLM round-trip in the request path, so plan for:

  • Latency: Every transformation adds model inference time. Use this only where semantic transformation is required.
  • Cost: Transformation prompts and outputs consume extra tokens on the transformation provider.
  • Scope control: Apply rewriting only on routes that need it, not globally.
  • Prompt discipline: Keep prompts explicit and output-constrained to reduce retries and drift.
  • Caching opportunities: If inputs are repetitive, cache transformed outputs to reduce both latency and token spend.
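As a sketch of the caching idea (the cache directory and key scheme are illustrative, not plugin features):

```shell
# Cache transformed outputs keyed by a hash of the normalized input, so
# repeated identical requests skip the extra LLM round-trip.
CACHE_DIR="${CACHE_DIR:-/tmp/ai-rewrite-cache}"
mkdir -p "$CACHE_DIR"

input="Hi, I can't log in. Please reset my password."

# Collapse whitespace before hashing so trivially different inputs share a key.
key=$(printf '%s' "$input" | tr -s '[:space:]' ' ' | sha256sum | cut -d' ' -f1)

if [ -f "$CACHE_DIR/$key" ]; then
  cat "$CACHE_DIR/$key"   # cache hit: no LLM call needed
else
  # Cache miss: call the transformation route here and store the result.
  # (The curl call is elided; a placeholder result is written instead.)
  printf '{"intent":"password_reset"}' > "$CACHE_DIR/$key"
  cat "$CACHE_DIR/$key"
fi
```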

Use AI request transformation when traditional deterministic transforms (mapping, regex, fixed templates) are insufficient.
