
Migrate from Vercel AI Gateway

Keep your Vercel AI SDK code, add response caching, detailed analytics, and smart routing. One provider for all models.

Quick Migration

Swap your provider imports; the rest of your AI SDK code stays the same:

```diff
- import { openai } from "@ai-sdk/openai";
- import { anthropic } from "@ai-sdk/anthropic";
+ import { generateText } from "ai";
+ import { createLLMGateway } from "@llmgateway/ai-sdk-provider";

+ const llmgateway = createLLMGateway({
+   apiKey: process.env.LLM_GATEWAY_API_KEY
+ });

  const { text } = await generateText({
-   model: openai("gpt-5.2"),
+   model: llmgateway("gpt-5.2"),
    prompt: "Hello!"
  });
```

The key difference: one provider and one API key for all models, with caching and analytics built in.

Migration Steps

1. Get Your LLM Gateway API Key

Sign up at llmgateway.io/signup and create an API key from your dashboard.

2. Install the LLM Gateway AI SDK Provider

Install the native LLM Gateway provider for the Vercel AI SDK:

```bash
pnpm add @llmgateway/ai-sdk-provider
```

This package provides full compatibility with the Vercel AI SDK and supports all LLM Gateway features.

3. Update Your Code

Basic Text Generation

```ts
// Before (Vercel AI Gateway with native providers)
import { openai } from "@ai-sdk/openai";
import { anthropic } from "@ai-sdk/anthropic";
import { generateText } from "ai";

const { text: openaiText } = await generateText({
  model: openai("gpt-4o"),
  prompt: "Hello!",
});

const { text: claudeText } = await generateText({
  model: anthropic("claude-3-5-sonnet-20241022"),
  prompt: "Hello!",
});
```

```ts
// After (LLM Gateway - single provider for all models)
import { createLLMGateway } from "@llmgateway/ai-sdk-provider";
import { generateText } from "ai";

const llmgateway = createLLMGateway({
  apiKey: process.env.LLM_GATEWAY_API_KEY,
});

const { text: openaiText } = await generateText({
  model: llmgateway("openai/gpt-4o"),
  prompt: "Hello!",
});

const { text: claudeText } = await generateText({
  model: llmgateway("anthropic/claude-3-5-sonnet-20241022"),
  prompt: "Hello!",
});
```

Streaming Responses

```ts
import { createLLMGateway } from "@llmgateway/ai-sdk-provider";
import { streamText } from "ai";

const llmgateway = createLLMGateway({
  apiKey: process.env.LLM_GATEWAY_API_KEY,
});

const { textStream } = await streamText({
  model: llmgateway("anthropic/claude-3-5-sonnet-20241022"),
  prompt: "Write a poem about coding",
});

for await (const text of textStream) {
  process.stdout.write(text);
}
```

Using in Next.js API Routes

```ts
// app/api/chat/route.ts
import { createLLMGateway } from "@llmgateway/ai-sdk-provider";
import { streamText } from "ai";

const llmgateway = createLLMGateway({
  apiKey: process.env.LLM_GATEWAY_API_KEY,
});

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = await streamText({
    model: llmgateway("openai/gpt-4o"),
    messages,
  });

  return result.toDataStreamResponse();
}
```

Alternative: Using OpenAI SDK Adapter

If you prefer not to install a new package, you can use @ai-sdk/openai with a custom base URL:

```ts
import { createOpenAI } from "@ai-sdk/openai";
import { generateText } from "ai";

const llmgateway = createOpenAI({
  baseURL: "https://api.llmgateway.io/v1",
  apiKey: process.env.LLM_GATEWAY_API_KEY,
});

const { text } = await generateText({
  model: llmgateway("openai/gpt-4o"),
  prompt: "Hello!",
});
```

4. Update Environment Variables

```bash
# Remove individual provider keys (optional - can keep as backup)
# OPENAI_API_KEY=sk-...
# ANTHROPIC_API_KEY=sk-ant-...

# Add LLM Gateway key
export LLM_GATEWAY_API_KEY=llmgtwy_your_key_here
```
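Because the gateway key replaces several provider keys, a missing or empty variable is an easy mistake to make during the switch. A minimal sketch of a fail-fast check; `requireEnv` is a hypothetical helper for illustration, not part of the SDK:

```typescript
// Hypothetical helper (not part of @llmgateway/ai-sdk-provider): throw at
// startup if a required environment variable is missing or empty, rather
// than sending unauthenticated requests at runtime.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage when constructing the provider:
// const llmgateway = createLLMGateway({ apiKey: requireEnv("LLM_GATEWAY_API_KEY") });
```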

Model Name Format

LLM Gateway supports two model ID formats:

Root Model IDs (without provider prefix) - Uses smart routing to automatically select the best provider based on uptime, throughput, price, and latency:

```
gpt-4o
claude-3-5-sonnet-20241022
gemini-1.5-pro
```

Provider-Prefixed Model IDs - Routes to a specific provider with automatic failover if uptime drops below 90%:

```
openai/gpt-4o
anthropic/claude-3-5-sonnet-20241022
google-ai-studio/gemini-1.5-pro
```

For more details on routing behavior, see the routing documentation.
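The two formats above differ only by the provider prefix, so they can be told apart mechanically. A small sketch with a hypothetical helper (`modelIdFormat` is for illustration only, not an SDK export):

```typescript
// Hypothetical helper (not part of the SDK): classify a model ID by format.
// Root IDs ("gpt-4o") use smart routing across providers; prefixed IDs
// ("openai/gpt-4o") pin a specific provider with automatic failover.
type ModelIdFormat = "root" | "provider-prefixed";

function modelIdFormat(modelId: string): ModelIdFormat {
  return modelId.includes("/") ? "provider-prefixed" : "root";
}
```

For example, `modelIdFormat("openai/gpt-4o")` yields `"provider-prefixed"`, while `modelIdFormat("gpt-4o")` yields `"root"`.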

Model Mapping Examples

| Vercel AI SDK | LLM Gateway |
| --- | --- |
| `openai("gpt-4o")` | `llmgateway("gpt-4o")` or `llmgateway("openai/gpt-4o")` |
| `anthropic("claude-3-5-sonnet-20241022")` | `llmgateway("claude-3-5-sonnet-20241022")` or `llmgateway("anthropic/claude-3-5-sonnet-20241022")` |
| `google("gemini-1.5-pro")` | `llmgateway("gemini-1.5-pro")` or `llmgateway("google-ai-studio/gemini-1.5-pro")` |

Check the models page for the full list of available models.

Tool Calling

LLM Gateway supports tool calling through the AI SDK:

```ts
import { createLLMGateway } from "@llmgateway/ai-sdk-provider";
import { generateText, tool } from "ai";
import { z } from "zod";

const llmgateway = createLLMGateway({
  apiKey: process.env.LLM_GATEWAY_API_KEY,
});

const { text, toolResults } = await generateText({
  model: llmgateway("openai/gpt-4o"),
  tools: {
    weather: tool({
      description: "Get the weather for a location",
      parameters: z.object({
        location: z.string(),
      }),
      execute: async ({ location }) => {
        return { temperature: 72, condition: "sunny" };
      },
    }),
  },
  prompt: "What's the weather in San Francisco?",
});
```

Self-Hosting LLM Gateway

If you prefer self-hosting, LLM Gateway is available under AGPLv3:

```bash
git clone https://github.com/llmgateway/llmgateway
cd llmgateway
pnpm install
pnpm setup
pnpm dev
```

This gives you the same managed experience with full control over your infrastructure.
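Once a self-hosted instance is running, you can point the SDK at it the same way as the OpenAI adapter shown earlier. A sketch under assumptions: the local URL and port below are placeholders, not documented defaults; substitute wherever your deployment exposes its OpenAI-compatible API.

```typescript
import { createOpenAI } from "@ai-sdk/openai";
import { generateText } from "ai";

// The base URL is an assumption for illustration; replace it with the
// address of your own self-hosted gateway instance.
const llmgateway = createOpenAI({
  baseURL: "http://localhost:4001/v1",
  apiKey: process.env.LLM_GATEWAY_API_KEY,
});

const { text } = await generateText({
  model: llmgateway("openai/gpt-4o"),
  prompt: "Hello!",
});
```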

Need Help?