
Migrate from LiteLLM

Switch from self-hosted LiteLLM to managed LLM Gateway. Same API format, zero infrastructure to maintain.

Running your own LiteLLM proxy works—until it doesn't. Scaling, monitoring, and keeping it running becomes another job. LLM Gateway gives you the same unified API with built-in analytics, caching, and a dashboard—without the infrastructure overhead.

Quick Migration

Both services use OpenAI-compatible endpoints, so migration is a two-line change:

```diff
- const baseURL = "http://localhost:4000/v1"; // LiteLLM proxy
+ const baseURL = "https://api.llmgateway.io/v1";

- const apiKey = process.env.LITELLM_API_KEY;
+ const apiKey = process.env.LLM_GATEWAY_API_KEY;
```

Why Teams Switch to LLM Gateway

| What You Get | LiteLLM (Self-Hosted) | LLM Gateway |
| --- | --- | --- |
| OpenAI-compatible API | Yes | Yes |
| Infrastructure to manage | Yes (you run it) | No (we run it) |
| Managed cloud option | No | Yes |
| Analytics dashboard | Basic | Per-request detail |
| Response caching | Manual setup | Built-in, automatic |
| Cost tracking | Via callbacks | Native, real-time |
| Provider key management | Config file | Web UI with rotation |
| Uptime & scaling | You handle it | 99.9% SLA (Pro/Ent) |

Still want to self-host? LLM Gateway is open source under AGPLv3—same features, your infrastructure.

For a detailed breakdown, see LLM Gateway vs LiteLLM.

Migration Steps

1. Get Your LLM Gateway API Key

Sign up at llmgateway.io/signup and create an API key from your dashboard.
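
Once you have a key, export it as LLM_GATEWAY_API_KEY so the code samples below can read it from the environment. A minimal sketch that fails fast if the variable is missing (the name is simply the convention used throughout this guide):

```python
import os

# Read the key the way the examples below do, and fail fast if it isn't set.
api_key = os.environ.get("LLM_GATEWAY_API_KEY")
if not api_key:
    raise RuntimeError("Set LLM_GATEWAY_API_KEY to the key from your LLM Gateway dashboard.")
```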

2. Map Your Models

LLM Gateway supports two model ID formats:

Root Model IDs (without provider prefix) - Uses smart routing to automatically select the best provider based on uptime, throughput, price, and latency:

```
gpt-5.2
claude-opus-4-5-20251101
gemini-3-flash-preview
```

Provider-Prefixed Model IDs - Routes to a specific provider with automatic failover if uptime drops below 90%:

```
openai/gpt-5.2
anthropic/claude-opus-4-5-20251101
google-ai-studio/gemini-3-flash-preview
```

This means many LiteLLM model names work directly with LLM Gateway:

| LiteLLM Model | LLM Gateway Model |
| --- | --- |
| gpt-5.2 | gpt-5.2 or openai/gpt-5.2 |
| claude-opus-4-5-20251101 | claude-opus-4-5-20251101 or anthropic/claude-opus-4-5-20251101 |
| gemini/gemini-3-flash-preview | gemini-3-flash-preview or google-ai-studio/gemini-3-flash-preview |
| bedrock/claude-opus-4-5-20251101 | claude-opus-4-5-20251101 or aws-bedrock/claude-opus-4-5-20251101 |

For more details on routing behavior, see the routing documentation.
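
If your LiteLLM config lists many models, an explicit lookup table can make the cut-over easier to review. The sketch below mirrors the mapping table above; `LITELLM_TO_GATEWAY` and `to_gateway_model` are hypothetical helpers for illustration, not part of either SDK:

```python
# Hypothetical mapping from LiteLLM model names to LLM Gateway model IDs.
# Unprefixed IDs use smart routing; provider-prefixed IDs pin a provider.
LITELLM_TO_GATEWAY = {
    "gpt-5.2": "gpt-5.2",  # or "openai/gpt-5.2"
    "claude-opus-4-5-20251101": "claude-opus-4-5-20251101",  # or "anthropic/claude-opus-4-5-20251101"
    "gemini/gemini-3-flash-preview": "gemini-3-flash-preview",
    "bedrock/claude-opus-4-5-20251101": "aws-bedrock/claude-opus-4-5-20251101",
}


def to_gateway_model(litellm_model: str) -> str:
    # Fall back to the original name, since many LiteLLM names work unchanged.
    return LITELLM_TO_GATEWAY.get(litellm_model, litellm_model)
```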

3. Update Your Code

Python with OpenAI SDK

```python
import os

from openai import OpenAI

# Before (LiteLLM proxy)
client = OpenAI(
    base_url="http://localhost:4000/v1",
    api_key=os.environ["LITELLM_API_KEY"],
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}],
)

# After (LLM Gateway) - model name can stay the same!
client = OpenAI(
    base_url="https://api.llmgateway.io/v1",
    api_key=os.environ["LLM_GATEWAY_API_KEY"],
)

response = client.chat.completions.create(
    model="gpt-4",  # or "openai/gpt-4" to target a specific provider
    messages=[{"role": "user", "content": "Hello!"}],
)
```

Python with LiteLLM Library

If you're using the LiteLLM library directly, you can point it to LLM Gateway:

```python
import os

import litellm

# Before (direct LiteLLM)
response = litellm.completion(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}],
)

# After (via LLM Gateway) - same model name works
response = litellm.completion(
    model="gpt-4",  # or "openai/gpt-4" to target a specific provider
    messages=[{"role": "user", "content": "Hello!"}],
    api_base="https://api.llmgateway.io/v1",
    api_key=os.environ["LLM_GATEWAY_API_KEY"],
)
```

TypeScript/JavaScript

```typescript
import OpenAI from "openai";

// Before (LiteLLM proxy)
const client = new OpenAI({
  baseURL: "http://localhost:4000/v1",
  apiKey: process.env.LITELLM_API_KEY,
});

// After (LLM Gateway) - same model name works
const client = new OpenAI({
  baseURL: "https://api.llmgateway.io/v1",
  apiKey: process.env.LLM_GATEWAY_API_KEY,
});

const completion = await client.chat.completions.create({
  model: "gpt-4", // or "openai/gpt-4" to target a specific provider
  messages: [{ role: "user", content: "Hello!" }],
});
```

cURL

```bash
# Before (LiteLLM proxy)
curl http://localhost:4000/v1/chat/completions \
  -H "Authorization: Bearer $LITELLM_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'

# After (LLM Gateway) - same model name works
curl https://api.llmgateway.io/v1/chat/completions \
  -H "Authorization: Bearer $LLM_GATEWAY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
# Use "openai/gpt-4" to target a specific provider
```

4. Migrate Configuration

LiteLLM Config (Before)

```yaml
# litellm_config.yaml
model_list:
  - model_name: gpt-4
    litellm_params:
      model: gpt-4
      api_key: sk-...
  - model_name: claude-3
    litellm_params:
      model: claude-3-sonnet-20240229
      api_key: sk-ant-...
```

LLM Gateway (After)

With LLM Gateway, you don't need a config file. Provider keys are managed in the web dashboard, or you can use the default LLM Gateway keys.

If you want to use your own provider keys, configure them in the dashboard under Settings > Provider Keys.
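
In practice, the YAML `model_list` above collapses into per-request model names: one client, with the model (optionally provider-prefixed) chosen on each call. A minimal sketch reusing the two models from the config above (adjust the IDs to whatever your gateway's model catalog actually lists):

```python
import os

from openai import OpenAI

# One client replaces the LiteLLM config file; provider keys live in the dashboard.
client = OpenAI(
    base_url="https://api.llmgateway.io/v1",
    api_key=os.environ["LLM_GATEWAY_API_KEY"],
)

# Previously "model_name: gpt-4" in litellm_config.yaml
gpt_response = client.chat.completions.create(
    model="openai/gpt-4",
    messages=[{"role": "user", "content": "Hello!"}],
)

# Previously "model_name: claude-3" in litellm_config.yaml
claude_response = client.chat.completions.create(
    model="anthropic/claude-3-sonnet-20240229",
    messages=[{"role": "user", "content": "Hello!"}],
)
```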

Streaming Support

LLM Gateway supports streaming identically to LiteLLM:

```python
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.llmgateway.io/v1",
    api_key=os.environ["LLM_GATEWAY_API_KEY"],
)

stream = client.chat.completions.create(
    model="openai/gpt-4",
    messages=[{"role": "user", "content": "Write a story"}],
    stream=True,
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
```

Function/Tool Calling

LLM Gateway supports function calling:

```python
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.llmgateway.io/v1",
    api_key=os.environ["LLM_GATEWAY_API_KEY"],
)

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the weather for a location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string"}
            },
            "required": ["location"]
        }
    }
}]

response = client.chat.completions.create(
    model="openai/gpt-4",
    messages=[{"role": "user", "content": "What's the weather in Tokyo?"}],
    tools=tools,
)
```
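
Handling the result works like any OpenAI-compatible endpoint: if the model decides to call the tool, the call appears on the returned message and you send the tool output back in a follow-up request. A short sketch of that round trip, with the weather lookup stubbed out for illustration:

```python
import json

message = response.choices[0].message

if message.tool_calls:
    call = message.tool_calls[0]
    args = json.loads(call.function.arguments)

    # Stubbed tool result; replace with a real weather lookup.
    result = {"location": args["location"], "forecast": "sunny"}

    followup = client.chat.completions.create(
        model="openai/gpt-4",
        messages=[
            {"role": "user", "content": "What's the weather in Tokyo?"},
            message,  # the assistant turn containing the tool call
            {"role": "tool", "tool_call_id": call.id, "content": json.dumps(result)},
        ],
        tools=tools,
    )
    print(followup.choices[0].message.content)
```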

Removing LiteLLM Infrastructure

After verifying LLM Gateway works for your use case, you can decommission your LiteLLM proxy:

  1. Update all clients to use LLM Gateway endpoints
  2. Monitor the LLM Gateway dashboard for successful requests
  3. Shut down your LiteLLM proxy server
  4. Remove LiteLLM configuration files
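
Before step 3, it can help to run a one-off smoke test from each environment you are cutting over, so the dashboard shows traffic from everywhere that matters. A minimal sketch using the same model name as the examples above:

```python
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.llmgateway.io/v1",
    api_key=os.environ["LLM_GATEWAY_API_KEY"],
)

# One cheap request per environment confirms auth and routing work
# before the LiteLLM proxy goes away.
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "ping"}],
)
print(response.choices[0].message.content)
```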

What Changes After Migration

  • No servers to babysit — We handle scaling, uptime, and updates
  • Real-time cost visibility — See what every request costs, broken down by model
  • Automatic caching — Repeated requests hit cache, reducing your spend
  • Web-based management — No more editing YAML files for config changes
  • New models immediately — Access new releases within 48 hours, no deployment needed

Self-Hosting LLM Gateway

If you prefer to self-host, as you would with LiteLLM, LLM Gateway is available under the AGPLv3 license:

```bash
git clone https://github.com/llmgateway/llmgateway
cd llmgateway
pnpm install
pnpm setup
pnpm dev
```

This gives you the same self-hosted control as a LiteLLM proxy, plus LLM Gateway's analytics and caching features.
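
Clients talk to a self-hosted instance exactly the same way; only the base URL changes. A sketch, assuming your deployment exposes the API at http://localhost:4002/v1 (a placeholder address; substitute whatever your instance actually serves):

```python
import os

from openai import OpenAI

# Point the same OpenAI-compatible client at your own deployment.
client = OpenAI(
    base_url="http://localhost:4002/v1",  # placeholder; use your instance's address
    api_key=os.environ["LLM_GATEWAY_API_KEY"],  # a key created in your self-hosted dashboard
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello from my own gateway!"}],
)
```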

Full Comparison

Want to see a detailed breakdown of all features? Check out our LLM Gateway vs LiteLLM comparison page.

Need Help?