Parameters
See plugin common configurations for configuration options available to all plugins.
provider
string
required
valid values:
openai, deepseek, or openai-compatible
LLM service provider. When set to openai, the plugin will proxy the request to https://api.openai.com/chat/completions. When set to deepseek, the plugin will proxy the request to https://api.deepseek.com/chat/completions. When set to openai-compatible, the plugin will proxy the request to the custom endpoint configured in override.
auth
object
required
Authentication configurations.
header
object
Authentication headers. At least one of header and query should be configured.
query
object
Authentication query parameters. At least one of header and query should be configured.
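As an illustration, a plugin configuration that authenticates to OpenAI with an API key in a request header might look like the following sketch (the header name follows OpenAI's convention; the key value is a placeholder):

```yaml
provider: openai
auth:
  header:
    Authorization: "Bearer <your-api-key>"
```

If the upstream service expects the credential as a query parameter instead, use the query map under auth with the parameter name your provider documents.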
options
object
Model configurations.
In addition to model, you can configure additional parameters and they will be forwarded to the upstream LLM service in the request body. For instance, if you are working with OpenAI, you can configure additional parameters such as temperature, top_p, and stream. See your LLM provider's API documentation for more available options.
model
string
Name of the LLM model, such as gpt-4 or gpt-3.5. See your LLM provider's API documentation for more available models.
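For example, an options block selecting a model and forwarding OpenAI sampling parameters in the request body could look like this (the parameter values are illustrative, not recommendations):

```yaml
options:
  model: gpt-4
  temperature: 0.7
  top_p: 0.9
  stream: false
```

Any key other than model is passed through to the upstream service unchanged, so the set of meaningful keys depends entirely on the configured provider.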
override
object
Override settings.
endpoint
string
LLM provider endpoint. Required when provider is openai-compatible.
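To proxy to a self-hosted or third-party service that exposes an OpenAI-compatible API, the provider and override settings might be combined as in this sketch (the endpoint URL is a placeholder for your own deployment):

```yaml
provider: openai-compatible
override:
  endpoint: "https://custom-llm.example.com/v1/chat/completions"
```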
timeout
integer
default:
30000
valid values:
greater than or equal to 1
Request timeout in milliseconds when requesting the LLM service.
keepalive
boolean
default:
true
If true, keep the connection alive when requesting the LLM service.
keepalive_timeout
integer
default:
60000
valid values:
greater than or equal to 1000
Keepalive timeout in milliseconds when requesting the LLM service.
keepalive_pool
integer
default:
30
Keepalive pool size for connections with the LLM service.
ssl_verify
boolean
default:
true
If true, verify the LLM service's certificate.
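Putting the connection-related parameters together, a configuration that raises the request timeout and tunes connection reuse might look like the following sketch (all values shown are the documented defaults except timeout):

```yaml
timeout: 60000
keepalive: true
keepalive_timeout: 60000
keepalive_pool: 30
ssl_verify: true
```

Since these fields all have defaults, you only need to set the ones you want to change.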