LLM (OpenAI)
Configure the agentgateway binary to route requests to the OpenAI chat completions API.
Before you begin
- Install the agentgateway binary.
  curl -sL https://agentgateway.dev/install | bash -
- Get an OpenAI API key.
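Before continuing, you can confirm both prerequisites are in place; a minimal shell sketch that only checks whether the binary is on your PATH and whether the key variable is set:

```shell
# Prerequisite check: report whether the agentgateway binary and API key are available.
if command -v agentgateway >/dev/null 2>&1; then
  echo "agentgateway: found"
else
  echo "agentgateway: missing"
fi
if [ -n "${OPENAI_API_KEY:-}" ]; then
  echo "OPENAI_API_KEY: set"
else
  echo "OPENAI_API_KEY: not set"
fi
```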
Steps
Route to an OpenAI backend through agentgateway.
Step 1: Set your API key
Store your OpenAI API key in an environment variable so agentgateway can authenticate to the API.
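The export below uses the shell's ${VAR:-default} expansion: the placeholder is substituted only when OPENAI_API_KEY is not already set. A quick demonstration with a hypothetical DEMO_KEY variable:

```shell
# ${VAR:-default} substitutes the default only when VAR is unset or empty.
unset DEMO_KEY
echo "${DEMO_KEY:-fallback-value}"   # prints: fallback-value
DEMO_KEY="real-key"
echo "${DEMO_KEY:-fallback-value}"   # prints: real-key
```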
export OPENAI_API_KEY="${OPENAI_API_KEY:-<your-api-key>}"
Step 2: Create the configuration
Create a config.yaml that defines an HTTP listener and an AI backend for OpenAI. This configuration listens on port 3000, routes traffic to the OpenAI backend, and attaches your API key to outgoing requests via the backendAuth policy.
cat > config.yaml << 'EOF'
# yaml-language-server: $schema=https://agentgateway.dev/schema/config
binds:
- port: 3000
  listeners:
  - protocol: HTTP
    routes:
    - backends:
      - ai:
          name: openai
          provider:
            openAI:
              model: gpt-3.5-turbo
      policies:
        backendAuth:
          key: "$OPENAI_API_KEY"
EOF
Because the heredoc delimiter is quoted ('EOF'), $OPENAI_API_KEY is written to config.yaml literally rather than expanded by the shell, so agentgateway reads the value from the environment at runtime.
Step 3: Start agentgateway
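The quoted delimiter in the heredoc above ('EOF') is what keeps $OPENAI_API_KEY intact in the file instead of letting the shell expand it. A small demonstration of the difference, using a hypothetical VAR:

```shell
# Quoted delimiter: the shell leaves $VAR untouched in the heredoc body.
VAR="expanded-by-shell"
cat << 'QUOTED'
$VAR
QUOTED
# Unquoted delimiter: the shell expands $VAR before cat sees it.
cat << UNQUOTED
$VAR
UNQUOTED
```

The first cat prints the literal text $VAR; the second prints expanded-by-shell.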
Run agentgateway with the config file.
agentgateway -f config.yaml
Example output:
info state_manager loaded config from File("config.yaml")
info app serving UI at http://localhost:15000/ui
info proxy::gateway started bind bind="bind/3000"
Step 4: Send a chat completion request
From another terminal, send a request to the chat completions endpoint.
curl -s http://localhost:3000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Say hello in one sentence."}]
  }' | jq .
Example output (abbreviated):
{
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you today?"
      }
    }
  ]
}
Next steps
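To pull out only the assistant text rather than the full JSON, the response can be piped through a jq filter; a sketch against an inline sample payload (jq assumed installed, payload abbreviated from the example output):

```shell
# Extract choices[0].message.content from a chat completions response.
RESPONSE='{"choices":[{"message":{"role":"assistant","content":"Hello! How can I help you today?"}}]}'
printf '%s' "$RESPONSE" | jq -r '.choices[0].message.content'
# prints: Hello! How can I help you today?
```

The same filter works directly on the curl output: append | jq -r '.choices[0].message.content' instead of | jq . in the request above.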
Check out more guides related to LLM consumption with agentgateway.