# Migrate from Vercel AI Gateway
Keep your Vercel AI SDK code, add response caching, detailed analytics, and smart routing. One provider for all models.
## Quick Migration
Swap your provider imports—your AI SDK code stays the same:
```diff
- import { openai } from "@ai-sdk/openai";
- import { anthropic } from "@ai-sdk/anthropic";
+ import { generateText } from "ai";
+ import { createLLMGateway } from "@llmgateway/ai-sdk-provider";

+ const llmgateway = createLLMGateway({
+   apiKey: process.env.LLM_GATEWAY_API_KEY
+ });

  const { text } = await generateText({
-   model: openai("gpt-5.2"),
+   model: llmgateway("gpt-5.2"),
    prompt: "Hello!"
  });
```
The key difference: one provider, one API key, all models—with caching and analytics built in.
## Migration Steps
### 1. Get Your LLM Gateway API Key
Sign up at llmgateway.io/signup and create an API key from your dashboard.
### 2. Install the LLM Gateway AI SDK Provider
Install the native LLM Gateway provider for the Vercel AI SDK:
```bash
pnpm add @llmgateway/ai-sdk-provider
```
This package provides full compatibility with the Vercel AI SDK and supports all LLM Gateway features.
### 3. Update Your Code

#### Basic Text Generation
```typescript
// Before (Vercel AI Gateway with native providers)
import { openai } from "@ai-sdk/openai";
import { anthropic } from "@ai-sdk/anthropic";
import { generateText } from "ai";

const { text: openaiText } = await generateText({
  model: openai("gpt-4o"),
  prompt: "Hello!",
});

const { text: claudeText } = await generateText({
  model: anthropic("claude-3-5-sonnet-20241022"),
  prompt: "Hello!",
});

// After (LLM Gateway - single provider for all models)
import { createLLMGateway } from "@llmgateway/ai-sdk-provider";
import { generateText } from "ai";

const llmgateway = createLLMGateway({
  apiKey: process.env.LLM_GATEWAY_API_KEY,
});

const { text: openaiText } = await generateText({
  model: llmgateway("openai/gpt-4o"),
  prompt: "Hello!",
});

const { text: claudeText } = await generateText({
  model: llmgateway("anthropic/claude-3-5-sonnet-20241022"),
  prompt: "Hello!",
});
```
#### Streaming Responses
```typescript
import { createLLMGateway } from "@llmgateway/ai-sdk-provider";
import { streamText } from "ai";

const llmgateway = createLLMGateway({
  apiKey: process.env.LLM_GATEWAY_API_KEY,
});

const { textStream } = await streamText({
  model: llmgateway("anthropic/claude-3-5-sonnet-20241022"),
  prompt: "Write a poem about coding",
});

for await (const text of textStream) {
  process.stdout.write(text);
}
```
#### Using in Next.js API Routes
```typescript
// app/api/chat/route.ts
import { createLLMGateway } from "@llmgateway/ai-sdk-provider";
import { streamText } from "ai";

const llmgateway = createLLMGateway({
  apiKey: process.env.LLM_GATEWAY_API_KEY,
});

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = await streamText({
    model: llmgateway("openai/gpt-4o"),
    messages,
  });

  return result.toDataStreamResponse();
}
```
#### Alternative: Using OpenAI SDK Adapter
If you prefer not to install a new package, you can use `@ai-sdk/openai` with a custom base URL:
```typescript
import { createOpenAI } from "@ai-sdk/openai";
import { generateText } from "ai";

const llmgateway = createOpenAI({
  baseURL: "https://api.llmgateway.io/v1",
  apiKey: process.env.LLM_GATEWAY_API_KEY,
});

const { text } = await generateText({
  model: llmgateway("openai/gpt-4o"),
  prompt: "Hello!",
});
```
### 4. Update Environment Variables
```bash
# Remove individual provider keys (optional - can keep as backup)
# OPENAI_API_KEY=sk-...
# ANTHROPIC_API_KEY=sk-ant-...

# Add LLM Gateway key
export LLM_GATEWAY_API_KEY=llmgtwy_your_key_here
```
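A missing key otherwise surfaces only as an auth error on the first model call. A small fail-fast check at startup makes the misconfiguration obvious immediately; this is an illustrative sketch, and the `requireEnv` helper is not part of any SDK:

```typescript
// Illustrative helper (not part of the AI SDK or LLM Gateway):
// throw at startup if a required key is absent, instead of
// failing with an auth error on the first request.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}
```

Call `requireEnv("LLM_GATEWAY_API_KEY")` once, before constructing the provider, and pass the returned value as `apiKey`.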
## Model Name Format
LLM Gateway supports two model ID formats:
**Root Model IDs** (without provider prefix) - Uses smart routing to automatically select the best provider based on uptime, throughput, price, and latency:
```
gpt-4o
claude-3-5-sonnet-20241022
gemini-1.5-pro
```
**Provider-Prefixed Model IDs** - Routes to a specific provider with automatic failover if uptime drops below 90%:
```
openai/gpt-4o
anthropic/claude-3-5-sonnet-20241022
google-ai-studio/gemini-1.5-pro
```
For more details on routing behavior, see the routing documentation.
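To make the two formats concrete, here is a small illustrative helper (not part of the SDK) that classifies a model ID the way the formats above describe: an ID containing a slash pins a provider, while a bare ID leaves provider selection to smart routing.

```typescript
// Illustrative only: classify a model ID per the two formats above.
// A "provider/model" ID targets that provider; a bare ID is routed
// automatically across providers.
interface ModelRoute {
  provider: string | null; // null means smart routing picks the provider
  model: string;
}

function parseModelId(id: string): ModelRoute {
  const slash = id.indexOf("/");
  if (slash === -1) {
    return { provider: null, model: id };
  }
  return { provider: id.slice(0, slash), model: id.slice(slash + 1) };
}
```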
### Model Mapping Examples
| Vercel AI SDK | LLM Gateway |
|---|---|
| `openai("gpt-4o")` | `llmgateway("gpt-4o")` or `llmgateway("openai/gpt-4o")` |
| `anthropic("claude-3-5-sonnet-20241022")` | `llmgateway("claude-3-5-sonnet-20241022")` or `llmgateway("anthropic/claude-3-5-sonnet-20241022")` |
| `google("gemini-1.5-pro")` | `llmgateway("gemini-1.5-pro")` or `llmgateway("google-ai-studio/gemini-1.5-pro")` |
Check the models page for the full list of available models.
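If you are migrating many call sites, a small lookup table can mechanize the rename. This sketch mirrors the table above; the map contents are examples only, so extend it for the models you actually use:

```typescript
// Sketch of a bulk-migration helper: map bare model names to
// provider-prefixed LLM Gateway IDs. Entries mirror the mapping
// table above; extend for your own models.
const modelMap: Record<string, string> = {
  "gpt-4o": "openai/gpt-4o",
  "claude-3-5-sonnet-20241022": "anthropic/claude-3-5-sonnet-20241022",
  "gemini-1.5-pro": "google-ai-studio/gemini-1.5-pro",
};

function toGatewayId(model: string): string {
  // Already-prefixed IDs pass through unchanged; unknown bare IDs
  // also pass through, falling back to smart routing.
  if (model.includes("/")) return model;
  return modelMap[model] ?? model;
}
```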
## Tool Calling
LLM Gateway supports tool calling through the AI SDK:
```typescript
import { createLLMGateway } from "@llmgateway/ai-sdk-provider";
import { generateText, tool } from "ai";
import { z } from "zod";

const llmgateway = createLLMGateway({
  apiKey: process.env.LLM_GATEWAY_API_KEY,
});

const { text, toolResults } = await generateText({
  model: llmgateway("openai/gpt-4o"),
  tools: {
    weather: tool({
      description: "Get the weather for a location",
      parameters: z.object({
        location: z.string(),
      }),
      execute: async ({ location }) => {
        return { temperature: 72, condition: "sunny" };
      },
    }),
  },
  prompt: "What's the weather in San Francisco?",
});
```
## Self-Hosting LLM Gateway
If you prefer self-hosting, LLM Gateway is available under AGPLv3:
```bash
git clone https://github.com/llmgateway/llmgateway
cd llmgateway
pnpm install
pnpm setup
pnpm dev
```
This gives you the same managed experience with full control over your infrastructure.
## Need Help?
- Browse available models at llmgateway.io/models
- Read the API documentation
- Contact support at [email protected]