Migrate from OpenRouter
Switch to LLM Gateway for built-in analytics, self-hosting options, and a simpler API. It's a two-line code change.
LLM Gateway works just like OpenRouter—same API format, same model names—but with built-in analytics and the option to self-host. Migration takes two lines of code.
Quick Migration
Change your base URL and API key:
```diff
- const baseURL = "https://openrouter.ai/api/v1";
- const apiKey = process.env.OPENROUTER_API_KEY;
+ const baseURL = "https://api.llmgateway.io/v1";
+ const apiKey = process.env.LLM_GATEWAY_API_KEY;
```
Migration Steps
1. Get Your LLM Gateway API Key
Sign up at llmgateway.io/signup and create an API key from your dashboard.
2. Update Environment Variables
```bash
# Remove OpenRouter credentials
# OPENROUTER_API_KEY=sk-or-...

# Add LLM Gateway credentials
LLM_GATEWAY_API_KEY=llmgtwy_your_key_here
```
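If you roll the change out gradually, it can help to have code that runs with either key set. A minimal sketch (the `pickApiKey` helper is hypothetical, not part of either SDK) that prefers the LLM Gateway key and falls back to the OpenRouter key during the transition:

```javascript
// Hypothetical migration helper: prefer the LLM Gateway key,
// fall back to OpenRouter if it is still the only key configured.
function pickApiKey(env) {
  if (env.LLM_GATEWAY_API_KEY) {
    return {
      apiKey: env.LLM_GATEWAY_API_KEY,
      baseURL: "https://api.llmgateway.io/v1",
    };
  }
  if (env.OPENROUTER_API_KEY) {
    return {
      apiKey: env.OPENROUTER_API_KEY,
      baseURL: "https://openrouter.ai/api/v1",
    };
  }
  throw new Error("No API key configured");
}
```

Pass the result straight into your client constructor; once the old key is removed from the environment, the fallback branch becomes dead code and the helper can be deleted.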
3. Update Your Code
Using fetch/axios
```javascript
// Before (OpenRouter)
const response = await fetch("https://openrouter.ai/api/v1/chat/completions", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "openai/gpt-5.2",
    messages: [{ role: "user", content: "Hello!" }],
  }),
});

// After (LLM Gateway)
const response = await fetch("https://api.llmgateway.io/v1/chat/completions", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.LLM_GATEWAY_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "gpt-5.2",
    messages: [{ role: "user", content: "Hello!" }],
  }),
});
```
Using OpenAI SDK
```javascript
import OpenAI from "openai";

// Before (OpenRouter)
const client = new OpenAI({
  baseURL: "https://openrouter.ai/api/v1",
  apiKey: process.env.OPENROUTER_API_KEY,
});

// After (LLM Gateway)
const client = new OpenAI({
  baseURL: "https://api.llmgateway.io/v1",
  apiKey: process.env.LLM_GATEWAY_API_KEY,
});

// Usage remains the same
const completion = await client.chat.completions.create({
  model: "anthropic/claude-3-5-sonnet-20241022",
  messages: [{ role: "user", content: "Hello!" }],
});
```
Using Vercel AI SDK
Both OpenRouter and LLM Gateway have native AI SDK providers, making migration straightforward:
```javascript
import { generateText } from "ai";

// Before (OpenRouter AI SDK Provider)
import { createOpenRouter } from "@openrouter/ai-sdk-provider";

const openrouter = createOpenRouter({
  apiKey: process.env.OPENROUTER_API_KEY,
});

const { text } = await generateText({
  model: openrouter("gpt-5.2"),
  prompt: "Hello!",
});

// After (LLM Gateway AI SDK Provider)
import { createLLMGateway } from "@llmgateway/ai-sdk-provider";

const llmgateway = createLLMGateway({
  apiKey: process.env.LLMGATEWAY_API_KEY,
});

const { text } = await generateText({
  model: llmgateway("gpt-5.2"),
  prompt: "Hello!",
});
```
Model Name Mapping
Most model names are compatible, but here are some common mappings:
| OpenRouter Model | LLM Gateway Model |
|---|---|
| openai/gpt-5.2 | gpt-5.2 or openai/gpt-5.2 |
| gemini/gemini-3-flash-preview | gemini-3-flash-preview or google-ai-studio/gemini-3-flash-preview |
| bedrock/claude-opus-4-5-20251101 | claude-opus-4-5-20251101 or aws-bedrock/claude-opus-4-5-20251101 |
Check the models page for the full list of available models.
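If OpenRouter model IDs are scattered through config files, a small rename pass can cover the mappings in the table above. This is a sketch, not an official utility: the prefix list only covers the rows shown here, and it assumes (per the table) that the bare model name is accepted on the LLM Gateway side.

```javascript
// Hypothetical helper covering the mappings listed in the table above.
// Strips the OpenRouter provider prefix; unknown names pass through unchanged.
const OPENROUTER_PREFIXES = ["openai/", "gemini/", "bedrock/"];

function toLLMGatewayModel(openRouterModel) {
  for (const prefix of OPENROUTER_PREFIXES) {
    if (openRouterModel.startsWith(prefix)) {
      // e.g. "openai/gpt-5.2" -> "gpt-5.2"
      return openRouterModel.slice(prefix.length);
    }
  }
  return openRouterModel; // already compatible
}
```

Extend the prefix list (or map prefixes to the provider-prefixed forms like `aws-bedrock/`) as you verify each model against the models page.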
Streaming Support
LLM Gateway supports streaming responses identically to OpenRouter:
```javascript
const stream = await client.chat.completions.create({
  model: "anthropic/claude-3-5-sonnet-20241022",
  messages: [{ role: "user", content: "Write a story" }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || "");
}
```
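Since each chunk carries only a delta, assembling the full response is just a matter of concatenating the `delta.content` values. A sketch of a collector that works with any OpenAI-style chunk stream (the `mockStream` generator below is a stand-in for a real API response, used so the example is self-contained):

```javascript
// Accumulate an OpenAI-style chat stream into a single string.
async function collectStream(stream) {
  let text = "";
  for await (const chunk of stream) {
    text += chunk.choices[0]?.delta?.content || "";
  }
  return text;
}

// Mock stream for illustration; in real code the stream comes from
// client.chat.completions.create({ ..., stream: true }).
async function* mockStream(parts) {
  for (const part of parts) {
    yield { choices: [{ delta: { content: part } }] };
  }
}
```

The same collector works unchanged against a live stream, since the SDK's stream object is also an async iterable of chunks in this shape.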
Full Comparison
Want to see a detailed breakdown of all features? Check out our LLM Gateway vs OpenRouter comparison page.
Need Help?
- Browse available models at llmgateway.io/models
- Read the API documentation
- Contact support at [email protected]