Claude Code Integration
Use GPT-5, Gemini, or any model with Claude Code. Three environment variables, full cost tracking.
Claude Code is locked to Anthropic's API by default. With LLM Gateway, you can point it at any model—GPT-5, Gemini, Llama, or 180+ others—while keeping the same Anthropic API format Claude Code expects.
Three environment variables. No code changes. Full cost tracking in your dashboard.
Video Tutorial
Set up Claude Code with LLM Gateway in under 2 minutes:
Quick Start
Configure Claude Code to use LLM Gateway with these environment variables:
```bash
export ANTHROPIC_BASE_URL=https://api.llmgateway.io
export ANTHROPIC_AUTH_TOKEN=llmgtwy_your_api_key_here
# optional: specify a model, otherwise it uses the default Claude model
export ANTHROPIC_MODEL=gpt-5 # or any model from our catalog

# now run claude!
claude
```
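To verify that requests are flowing through the gateway, you can run a one-off prompt; this sketch assumes your Claude Code version supports the -p (print) flag for non-interactive use. The request should then appear in your LLM Gateway dashboard.

```bash
# one-off, non-interactive prompt to confirm routing (assumes the -p/--print flag)
claude -p "Reply with OK if you can read this."
```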
Why This Works
LLM Gateway's /v1/messages endpoint speaks Anthropic's API format natively. We handle the translation to each provider behind the scenes. This means:
- Use any model — GPT-5, Gemini, Llama, or Claude itself
- Keep your workflow — Claude Code doesn't know the difference
- Track costs — Every request appears in your LLM Gateway dashboard
- Automatic caching — Repeated requests hit cache, saving money
Choosing Models
You can use any model from the models page. Popular options for Claude Code include:
Use OpenAI's Latest Models
```bash
# Use the latest GPT model
export ANTHROPIC_MODEL=gpt-5

# Use a cost-effective alternative
export ANTHROPIC_MODEL=gpt-5-mini
```
Use Google's Gemini
```bash
export ANTHROPIC_MODEL=google/gemini-2.5-pro
```
Use Anthropic's Claude Models
```bash
export ANTHROPIC_MODEL=anthropic/claude-3-5-sonnet-20241022
```
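You can also switch models for a single session by setting the variable inline, which is standard shell behavior rather than anything gateway-specific:

```bash
# use Gemini for this invocation only, without changing your exported default
ANTHROPIC_MODEL=google/gemini-2.5-pro claude
```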
Environment Variables
When configuring Claude Code, you can use these environment variables:
ANTHROPIC_MODEL
Specifies the main model to use for primary requests.
```bash
export ANTHROPIC_MODEL=gpt-5
```
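ANTHROPIC_SMALL_FAST_MODEL
Claude Code also reads ANTHROPIC_SMALL_FAST_MODEL, which selects the model used for its lightweight background tasks. A minimal sketch, assuming you want those routine operations on a cheaper model (the complete example below uses the same value):

```bash
# route Claude Code's background/helper requests to a cheaper model
export ANTHROPIC_SMALL_FAST_MODEL=gpt-5-nano
```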
Complete Configuration Example
```bash
export ANTHROPIC_BASE_URL=https://api.llmgateway.io
export ANTHROPIC_AUTH_TOKEN=llmgtwy_your_api_key_here
export ANTHROPIC_MODEL=gpt-5
export ANTHROPIC_SMALL_FAST_MODEL=gpt-5-nano
```
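To avoid re-exporting these in every new terminal, you can persist them in your shell profile. A minimal sketch for zsh (adapt the file path for bash or another shell):

```bash
# persist the gateway configuration for future shells (zsh example)
cat >> ~/.zshrc <<'EOF'
export ANTHROPIC_BASE_URL=https://api.llmgateway.io
export ANTHROPIC_AUTH_TOKEN=llmgtwy_your_api_key_here
export ANTHROPIC_MODEL=gpt-5
export ANTHROPIC_SMALL_FAST_MODEL=gpt-5-nano
EOF
source ~/.zshrc
```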
Making Manual API Requests
If you want to test the endpoint directly, you can make manual requests:
```bash
curl -X POST "https://api.llmgateway.io/v1/messages" \
  -H "Authorization: Bearer $LLM_GATEWAY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5",
    "messages": [
      {"role": "user", "content": "Hello, how are you?"}
    ],
    "max_tokens": 100
  }'
```
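Streaming should work the same way, assuming the gateway passes Anthropic-style server-sent events through for the model you choose: add "stream": true to the request body and use curl -N so the output isn't buffered.

```bash
curl -N -X POST "https://api.llmgateway.io/v1/messages" \
  -H "Authorization: Bearer $LLM_GATEWAY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5",
    "stream": true,
    "messages": [
      {"role": "user", "content": "Hello, how are you?"}
    ],
    "max_tokens": 100
  }'
```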
Response Format
The endpoint returns responses in Anthropic's message format:
1{2 "id": "msg_abc123",3 "type": "message",4 "role": "assistant",5 "model": "gpt-5",6 "content": [7 {8 "type": "text",9 "text": "Hello! I'm doing well, thank you for asking. How can I help you today?"10 }11 ],12 "stop_reason": "end_turn",13 "stop_sequence": null,14 "usage": {15 "input_tokens": 13,16 "output_tokens": 2017 }18}
1{2 "id": "msg_abc123",3 "type": "message",4 "role": "assistant",5 "model": "gpt-5",6 "content": [7 {8 "type": "text",9 "text": "Hello! I'm doing well, thank you for asking. How can I help you today?"10 }11 ],12 "stop_reason": "end_turn",13 "stop_sequence": null,14 "usage": {15 "input_tokens": 13,16 "output_tokens": 2017 }18}
What You Get
- Any model in Claude Code — GPT-5 for heavy lifting, GPT-4o Mini for routine tasks
- Cost visibility — See exactly what each coding session costs
- One bill — Stop managing separate accounts for OpenAI, Anthropic, Google
- Response caching — Repeated requests (like linting the same file) hit cache
- Discounts — Check discounted models for savings up to 90%
Get Started
- Sign up free — no credit card required
- Copy your API key from the dashboard
- Set the three environment variables above
- Run claude and start coding
Questions? Check our docs or join Discord.