Convert Anthropic Messages to OpenAI Chat Completions

This guide explains how API7 AI Gateway converts Anthropic Messages API requests to the OpenAI Chat Completions API format. This enables teams to use the Anthropic SDK while routing to any OpenAI-compatible backend provider — without changing application code.

Overview

Teams often adopt different LLM SDKs for different services. Switching providers typically requires rewriting API integration code. API7 AI Gateway solves this by converting between protocols at the gateway layer:

  • Anthropic SDK → OpenAI backend: Send Anthropic-format requests; the gateway converts and forwards to any OpenAI-compatible provider.
  • Transparent response conversion: Responses from the OpenAI backend are automatically converted back to Anthropic format.

Protocol conversion is triggered automatically — when the gateway detects a request to /v1/messages, it recognizes the Anthropic protocol and handles conversion transparently.

Note: Auto-detection is based on the request URI /v1/messages. The gateway converts Anthropic-format requests to OpenAI format for forwarding; the reverse direction (sending OpenAI-format requests to an Anthropic backend) is not auto-detected.

Prerequisites

  • Install Docker.
  • Install cURL to send requests to the services for validation.
  • Have a running API7 Enterprise Gateway instance. See the Getting Started Guide for setup instructions.

How Protocol Conversion Works

The conversion pipeline:

  1. Auto-detection: When the request URI is /v1/messages, the gateway identifies it as Anthropic protocol.
  2. Request conversion: Anthropic fields are mapped to OpenAI equivalents (system prompts, messages, tool definitions, parameters).
  3. Response conversion: OpenAI response fields are mapped back to Anthropic format (content blocks, stop reasons, usage).
  4. Streaming support: OpenAI SSE stream chunks are converted to Anthropic SSE events (message_start, content_block_delta, message_delta, message_stop).
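To make step 2 concrete, here is a minimal Python sketch of the request mapping. It is illustrative only, not the gateway's actual implementation; the field names follow the two public APIs (Anthropic Messages and OpenAI Chat Completions).

```python
def anthropic_to_openai_request(req: dict) -> dict:
    """Map an Anthropic Messages request body to an OpenAI Chat Completions body (sketch)."""
    messages = []
    # Anthropic's top-level system prompt becomes the first OpenAI message.
    if "system" in req:
        messages.append({"role": "system", "content": req["system"]})
    messages.extend(req.get("messages", []))

    out = {"model": req["model"], "messages": messages}
    if "max_tokens" in req:
        out["max_completion_tokens"] = req["max_tokens"]
    if "stop_sequences" in req:
        out["stop"] = req["stop_sequences"]
    return out


converted = anthropic_to_openai_request({
    "model": "gpt-4o",
    "max_tokens": 1024,
    "system": "You are a helpful assistant.",
    "messages": [{"role": "user", "content": "What is an API gateway?"}],
})
print(converted["messages"][0])  # the system prompt, moved into messages[0]
```

Response and streaming conversion (steps 3 and 4) run the same mappings in reverse on the backend's output.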

Supported Conversions

| Anthropic | Direction | OpenAI |
| --- | --- | --- |
| system (top-level field) | → | messages[0].role: "system" |
| max_tokens | → | max_completion_tokens |
| stop_sequences | → | stop |
| stop_reason: "end_turn" | ← | finish_reason: "stop" |
| stop_reason: "max_tokens" | ← | finish_reason: "length" |
| stop_reason: "tool_use" | ← | finish_reason: "tool_calls" |
| input_tokens / output_tokens | ← | prompt_tokens / completion_tokens |
| tool_use / tool_result blocks | ↔ | function / tool calls |
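The response-direction rows of the table can be sketched the same way. The following Python snippet is illustrative (again, not the gateway's code); it maps OpenAI finish_reason and usage fields back to their Anthropic names:

```python
# OpenAI finish_reason values mapped back to Anthropic stop_reason values.
FINISH_TO_STOP = {
    "stop": "end_turn",
    "length": "max_tokens",
    "tool_calls": "tool_use",
}


def openai_to_anthropic_response(resp: dict) -> dict:
    """Map an OpenAI Chat Completions response to Anthropic Messages shape (sketch)."""
    choice = resp["choices"][0]
    return {
        "type": "message",
        "role": "assistant",
        "content": [{"type": "text", "text": choice["message"]["content"]}],
        "stop_reason": FINISH_TO_STOP.get(choice["finish_reason"], "end_turn"),
        "usage": {
            "input_tokens": resp["usage"]["prompt_tokens"],
            "output_tokens": resp["usage"]["completion_tokens"],
        },
    }


msg = openai_to_anthropic_response({
    "choices": [{"message": {"content": "Hello"}, "finish_reason": "stop"}],
    "usage": {"prompt_tokens": 18, "completion_tokens": 35},
})
print(msg["stop_reason"])  # end_turn
```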

Configure Protocol Conversion

Route Anthropic-format requests to an OpenAI backend. No explicit protocol configuration is needed — the gateway auto-detects the protocol from the URI.

curl "http://127.0.0.1:7080/apisix/admin/routes?gateway_group_id=default" -X PUT \
  -H "X-API-KEY: $ADMIN_API_KEY" \
  -d '{
    "id": "protocol-conversion",
    "service_id": "'"$SERVICE_ID"'",
    "paths": ["/v1/messages"],
    "plugins": {
      "ai-proxy": {
        "provider": "openai",
        "auth": {
          "header": {
            "Authorization": "Bearer '"$OPENAI_API_KEY"'"
          }
        },
        "options": {
          "model": "gpt-4o"
        }
      }
    }
  }'

❶ Set the URI to /v1/messages — the Anthropic Messages API endpoint. The gateway auto-detects this as Anthropic protocol.

❷ Set the backend provider to openai. The gateway converts Anthropic requests to OpenAI format before forwarding.

❸ Set the backend model. Clients send Anthropic-format requests, but the actual inference runs on this OpenAI model.

Example: Use Anthropic SDK with OpenAI Backend

Send an Anthropic-format request to the route:

curl "http://127.0.0.1:9080/v1/messages" -X POST \
  -H "Content-Type: application/json" \
  -H "anthropic-version: 2023-06-01" \
  -d '{
    "model": "gpt-4o",
    "max_tokens": 1024,
    "system": "You are a helpful assistant.",
    "messages": [
      { "role": "user", "content": "What is an API gateway?" }
    ]
  }'

You should receive a response in Anthropic format:

{
  "id": "msg_abc123",
  "type": "message",
  "role": "assistant",
  "content": [
    {
      "type": "text",
      "text": "An API gateway is a server that acts as a single entry point for API requests, handling routing, authentication, rate limiting, and other cross-cutting concerns for backend services."
    }
  ],
  "model": "gpt-4o",
  "stop_reason": "end_turn",
  "usage": {
    "input_tokens": 18,
    "output_tokens": 35
  }
}

The response follows the Anthropic format (type: "message", content blocks, stop_reason, input_tokens/output_tokens) even though the actual inference ran on OpenAI's GPT-4o.
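Because the shape matches the native Anthropic format, existing client code that walks content blocks keeps working unchanged. For example, a typical text-extraction helper (illustrative, not tied to any particular SDK) handles the gateway's converted responses exactly as it would handle real Anthropic responses:

```python
def extract_text(message: dict) -> str:
    """Concatenate the text blocks of an Anthropic-format message."""
    return "".join(
        block["text"]
        for block in message.get("content", [])
        if block.get("type") == "text"
    )


sample = {
    "type": "message",
    "content": [{"type": "text", "text": "An API gateway is a single entry point."}],
    "stop_reason": "end_turn",
}
print(extract_text(sample))  # An API gateway is a single entry point.
```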

Example: Tool Calling Across Protocols

Send an Anthropic-format request with tool definitions:

curl "http://127.0.0.1:9080/v1/messages" -X POST \
  -H "Content-Type: application/json" \
  -H "anthropic-version: 2023-06-01" \
  -d '{
    "model": "gpt-4o",
    "max_tokens": 1024,
    "tools": [
      {
        "name": "get_weather",
        "description": "Get weather for a location",
        "input_schema": {
          "type": "object",
          "properties": {
            "location": { "type": "string" }
          },
          "required": ["location"]
        }
      }
    ],
    "messages": [
      { "role": "user", "content": "What is the weather in San Francisco?" }
    ]
  }'

The gateway converts Anthropic tool definitions to OpenAI function format, forwards to OpenAI, then converts the response back to Anthropic tool_use blocks.
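A hedged sketch of that tool-definition mapping (illustrative only, not the gateway's internal code): Anthropic's name, description, and input_schema map onto OpenAI's function object, whose JSON Schema lives under parameters.

```python
def anthropic_tool_to_openai(tool: dict) -> dict:
    """Map an Anthropic tool definition to an OpenAI function-tool definition (sketch)."""
    return {
        "type": "function",
        "function": {
            "name": tool["name"],
            "description": tool.get("description", ""),
            # Anthropic's input_schema is the same JSON Schema that OpenAI expects as parameters.
            "parameters": tool["input_schema"],
        },
    }


weather_tool = anthropic_tool_to_openai({
    "name": "get_weather",
    "description": "Get weather for a location",
    "input_schema": {
        "type": "object",
        "properties": {"location": {"type": "string"}},
        "required": ["location"],
    },
})
print(weather_tool["function"]["name"])  # get_weather
```

On the way back, a tool_calls entry in the OpenAI response becomes a tool_use content block with the same name and arguments.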

Limitations

  • Auto-detection relies on the exact URI /v1/messages. Other URI patterns are not recognized as Anthropic protocol.
  • Protocol conversion is between Anthropic and OpenAI only. Other protocol pairs are not currently supported.
