Get Your Adaptive API Key

Sign up here to create an account and generate your API key.

Overview

The Vercel AI SDK works seamlessly with Adaptive through two methods:
  • Adaptive Provider (Recommended): Use the native @adaptive-llm/adaptive-ai-provider package for built-in support.
  • OpenAI Provider: Use Adaptive via @ai-sdk/openai with a custom base URL.

Method 1: Adaptive Provider

Installation

npm install ai @adaptive-llm/adaptive-ai-provider

Basic Setup

import { adaptive } from "@adaptive-llm/adaptive-ai-provider";

// Use default configuration
const model = adaptive();
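If you need to set a non-default API key or base URL, AI SDK provider packages conventionally export a create* factory. Assuming this package follows that convention (the createAdaptive name below is an assumption; verify against the package README), explicit setup would look like:

import { createAdaptive } from "@adaptive-llm/adaptive-ai-provider"; // assumed factory name

const adaptive = createAdaptive({
  // Most providers fall back to an environment variable when omitted
  apiKey: process.env.ADAPTIVE_API_KEY,
});

const model = adaptive();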

Method 2: OpenAI Provider

Installation

npm install ai @ai-sdk/openai

Configuration

import { createOpenAI } from '@ai-sdk/openai';
import { generateText } from 'ai';

const adaptiveOpenAI = createOpenAI({
  apiKey: process.env.ADAPTIVE_API_KEY,
  baseURL: 'https://api.llmadaptive.uk/v1',
});

const { text } = await generateText({
  model: adaptiveOpenAI(''), // Empty string enables intelligent routing
  prompt: 'Explain quantum computing simply',
});
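The empty model ID hands model selection to Adaptive. Passing a concrete provider:model string instead should pin a specific model (an assumption based on standard OpenAI-compatible behavior and the model-string format used later in this guide; verify against the Adaptive docs):

// Pin a specific model instead of letting Adaptive route
// (assumed model-string format -- verify against the Adaptive docs)
const { text: pinnedText } = await generateText({
  model: adaptiveOpenAI('anthropic:claude-3-5-haiku'),
  prompt: 'Explain quantum computing simply',
});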

Text Generation

import { generateText } from 'ai';
import { adaptive } from '@adaptive-llm/adaptive-ai-provider';

const { text } = await generateText({
  model: adaptive(),
  prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});

Streaming

import { streamText } from 'ai';
import { adaptive } from '@adaptive-llm/adaptive-ai-provider';

const { textStream } = await streamText({
  model: adaptive(),
  prompt: 'Explain machine learning step by step',
});

for await (const delta of textStream) {
  process.stdout.write(delta);
}

React Chat Component

'use client'; // useChat is a client-side hook

import { useChat } from 'ai/react';

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    api: '/api/chat', // Your API route using Adaptive
  });

  return (
    <div>
      {messages.map(m => (
        <div key={m.id}>
          <strong>{m.role}: </strong>
          {m.content}
        </div>
      ))}
      
      <form onSubmit={handleSubmit}>
        <input
          value={input}
          placeholder="Say something..."
          onChange={handleInputChange}
        />
        <button type="submit">Send</button>
      </form>
    </div>
  );
}
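The component above posts to /api/chat. A minimal sketch of that route in a Next.js App Router project (assuming the AI SDK's toDataStreamResponse helper; older SDK versions name the response helper differently):

// app/api/chat/route.ts
import { streamText } from 'ai';
import { adaptive } from '@adaptive-llm/adaptive-ai-provider';

export async function POST(req: Request) {
  const { messages } = await req.json();

  // Adaptive picks the model for each chat turn
  const result = await streamText({
    model: adaptive(),
    messages,
  });

  return result.toDataStreamResponse();
}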

Configuration Parameters

Advanced configuration options are available with the Adaptive provider for intelligent routing and optimization.
import { generateText } from 'ai';
import { adaptive } from '@adaptive-llm/adaptive-ai-provider';

await generateText({
  model: adaptive(),
  prompt: "Summarize this article",
  providerOptions: {
    adaptive: {
      // Intelligent routing configuration
      model_router: {
        models: [
          "anthropic:claude-sonnet-4-5", // Specific Anthropic model
          "openai:gpt-5-mini" // Specific OpenAI model
        ],
        cost_bias: 0.3, // 0 = cheapest, 1 = best performance
        complexity_threshold: 0.5, // Override complexity detection
        token_threshold: 1000 // Override token threshold
      }
    }
  }
});

Parameter Details

model_router (controls intelligent model selection):
  • models: Array of allowed provider:model strings
    • "anthropic:claude-sonnet-4-5" - Specific Anthropic model
    • "openai:gpt-5-mini" - Specific OpenAI model
  • cost_bias: Balance cost vs. performance (0-1)
    • 0 = Always choose the cheapest option
    • 0.5 = Balanced cost and performance
    • 1 = Always choose the best-performing option
  • complexity_threshold: Override automatic complexity detection (0-1)
  • token_threshold: Override the automatic token-count threshold

Fallback (controls provider fallback behavior):
  • enabled: Enable/disable fallback (default: true)
  • mode: Fallback strategy
    • "sequential" = Try providers one by one (lower cost)
    • "race" = Try multiple providers simultaneously (faster)

Semantic caching (improves performance by caching similar requests):
  • enabled: Enable semantic caching
  • similarity_threshold: Similarity threshold (0-1) for cache hits
    • Higher values = stricter matching
    • Lower values = more cache hits but lower accuracy
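Putting these together, a request that tunes fallback and caching might look like the sketch below. The fallback and semantic_cache key names are assumptions (only model_router appears in the example above); check the provider documentation for the exact option names.

import { generateText } from 'ai';
import { adaptive } from '@adaptive-llm/adaptive-ai-provider';

await generateText({
  model: adaptive(),
  prompt: 'Summarize this article',
  providerOptions: {
    adaptive: {
      // Key names below are assumptions -- verify in the provider docs
      fallback: {
        enabled: true,       // default: true
        mode: 'sequential',  // or 'race' to race providers for lower latency
      },
      semantic_cache: {
        enabled: true,
        similarity_threshold: 0.85, // stricter matching, fewer false cache hits
      },
    },
  },
});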

Custom Providers

Custom providers are no longer supported. All models must be specified in the provider:model_name string format and must be available in the model registry.
Registry Model Configuration
await generateText({
  model: adaptive(),
  prompt: "Explain machine learning concepts",
  providerOptions: {
    adaptive: {
      // Include registry models in model list
      model_router: {
        models: [
          "openai:gpt-5-mini", // Standard provider
          "anthropic:claude-3-5-haiku"
        ],
        cost_bias: 0.5
      }
    }
  }
});

Tool/Function Calling

import { generateText, tool } from 'ai';
import { z } from 'zod';
import { adaptive } from '@adaptive-llm/adaptive-ai-provider';

const { text } = await generateText({
  model: adaptive(),
  prompt: "What's the weather in New York?",
  maxSteps: 2, // give the model a follow-up step so the tool result appears in the final text
  tools: {
    getWeather: tool({
      description: 'Get weather for a location',
      parameters: z.object({
        location: z.string(),
      }),
      execute: async ({ location }) => {
        return `Weather in ${location} is sunny and 72°F`;
      },
    }),
  },
});

Cache Tier Tracking

Access cache information in the response when using the Adaptive provider:
Cache Tier Information
const result = await generateText({
  model: adaptive(),
  prompt: "Hello world",
});

// Check cache tier
console.log(result.usage?.cache_tier);
// "semantic" | undefined

Environment Variables

Environment Setup
# .env.local
ADAPTIVE_API_KEY=your-adaptive-api-key

What You Get

Intelligent Routing

Automatic model selection based on your prompt complexity

Built-in Streaming

Real-time response streaming with React components

Cost Optimization

Significant cost savings through smart provider selection

Provider Transparency

See which AI provider was used for each request
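
For example, the routed model should be visible via the AI SDK's standard response metadata (a sketch; the exact field Adaptive populates may differ):

import { generateText } from 'ai';
import { adaptive } from '@adaptive-llm/adaptive-ai-provider';

const result = await generateText({
  model: adaptive(),
  prompt: 'Hello world',
});

// The AI SDK surfaces the responding model's ID on the response metadata;
// with Adaptive this should show which model the request was routed to
// (assumption -- verify the exact field against the Adaptive docs).
console.log(result.response?.modelId);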

Next Steps