Overview

The Vercel AI SDK works seamlessly with Adaptive through two methods:
  • Adaptive Provider (Recommended): Use the native @adaptive-llm/adaptive-ai-provider provider for built-in support.
  • OpenAI Provider: Use Adaptive via @ai-sdk/openai with a custom base URL.

Method 1: Adaptive Provider

Installation

npm install ai @adaptive-llm/adaptive-ai-provider

Basic Setup

import { adaptive } from "@adaptive-llm/adaptive-ai-provider";

// Use default configuration
const model = adaptive();

Method 2: OpenAI Provider

Installation

npm install ai @ai-sdk/openai

Configuration

import { createOpenAI } from '@ai-sdk/openai';
import { generateText } from 'ai';

const adaptiveOpenAI = createOpenAI({
  apiKey: process.env.ADAPTIVE_API_KEY,
  baseURL: 'https://www.llmadaptive.uk/api/v1',
});

const { text } = await generateText({
  model: adaptiveOpenAI(''), // Empty string enables intelligent routing
  prompt: 'Explain quantum computing simply',
});

Text Generation

import { generateText } from 'ai';
import { adaptive } from '@adaptive-llm/adaptive-ai-provider';

const { text } = await generateText({
  model: adaptive(),
  prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});

Streaming

import { streamText } from 'ai';
import { adaptive } from '@adaptive-llm/adaptive-ai-provider';

const { textStream } = streamText({
  model: adaptive(),
  prompt: 'Explain machine learning step by step',
});

for await (const delta of textStream) {
  process.stdout.write(delta);
}

React Chat Component

import { useChat } from 'ai/react';

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    api: '/api/chat', // Your API route using Adaptive
  });

  return (
    <div>
      {messages.map(m => (
        <div key={m.id}>
          <strong>{m.role}: </strong>
          {m.content}
        </div>
      ))}
      
      <form onSubmit={handleSubmit}>
        <input
          value={input}
          placeholder="Say something..."
          onChange={handleInputChange}
        />
        <button type="submit">Send</button>
      </form>
    </div>
  );
}
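
The `/api/chat` route the component posts to is not shown above; in a Next.js App Router project it might look like the following sketch, which pairs the Adaptive provider with the AI SDK's streamText helper (the exact response helper name varies across AI SDK versions; toDataStreamResponse is assumed here):

```typescript
// app/api/chat/route.ts - sketch of a streaming chat endpoint for useChat
import { streamText } from 'ai';
import { adaptive } from '@adaptive-llm/adaptive-ai-provider';

export async function POST(req: Request) {
  // useChat sends the conversation history as { messages }
  const { messages } = await req.json();

  const result = streamText({
    model: adaptive(),
    messages,
  });

  // Stream tokens back to the useChat hook on the client
  return result.toDataStreamResponse();
}
```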

Configuration Parameters

The Adaptive provider supports advanced configuration for intelligent routing and optimization via providerOptions:
import { generateText } from 'ai';
import { adaptive } from '@adaptive-llm/adaptive-ai-provider';

await generateText({
  model: adaptive(),
  prompt: "Summarize this article",
  providerOptions: {
    adaptive: {
      // Intelligent routing configuration
      model_router: {
        models: [
          { provider: "anthropic" }, // All Anthropic models
          { provider: "openai", model_name: "gpt-4" } // Specific OpenAI model
        ],
        cost_bias: 0.3, // 0 = cheapest, 1 = best performance
        complexity_threshold: 0.5, // Override complexity detection
        token_threshold: 1000 // Override token threshold
      }
    }
  }
});

Parameter Details

  • models: Candidate list for routing. An entry with only provider (e.g. { provider: "anthropic" }) allows all of that provider's models; adding model_name pins a specific model.
  • cost_bias: Value between 0 and 1, where 0 favors the cheapest model and 1 favors the best-performing model.
  • complexity_threshold: Overrides the automatic prompt-complexity detection threshold.
  • token_threshold: Overrides the token-count threshold used for routing.

Custom Providers

Configure custom providers alongside standard ones using the Adaptive provider:
Custom Provider Configuration
await generateText({
  model: adaptive(),
  prompt: "Explain machine learning concepts",
  providerOptions: {
    adaptive: {
      // Include custom provider in model list
      model_router: {
        models: [
          { provider: "openai" }, // Standard provider
          {
            provider: "my-custom-llm", // Custom provider
            model_name: "custom-model-v1",
            cost_per_1m_input_tokens: 2.0,
            cost_per_1m_output_tokens: 6.0,
            max_context_tokens: 16000,
            max_output_tokens: 4000,
            supports_function_calling: true,
            task_type: "Text Generation",
            complexity: "medium"
          }
        ],
        cost_bias: 0.5
      },
      
      // Configure each custom provider
      provider_configs: {
        "my-custom-llm": {
          base_url: "https://api.mycustom.com/v1",
          api_key: "sk-custom-api-key-here",
          auth_type: "bearer",
          headers: {
            "X-Custom-Header": "value"
          },
          timeout_ms: 45000
        }
      }
    }
  }
});

Tool/Function Calling

import { generateText, tool } from 'ai';
import { z } from 'zod';
import { adaptive } from '@adaptive-llm/adaptive-ai-provider';

const { text } = await generateText({
  model: adaptive(),
  prompt: "What's the weather in New York?",
  tools: {
    getWeather: tool({
      description: 'Get weather for a location',
      parameters: z.object({
        location: z.string(),
      }),
      execute: async ({ location }) => {
        return `Weather in ${location} is sunny and 72°F`;
      },
    }),
  },
});

Cache Tier Tracking

Access cache information in the response when using the Adaptive provider:
Cache Tier Information
const result = await generateText({
  model: adaptive(),
  prompt: "Hello world",
});

// Check cache tier
console.log(result.usage?.cache_tier);
// "semantic_exact" | "semantic_similar" | "prompt_response" | undefined

Environment Variables

Environment Setup
# .env.local
ADAPTIVE_API_KEY=your-adaptive-api-key
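
A missing key typically only surfaces when the first request fails. A small guard (a hypothetical helper, not part of either SDK) can fail fast at startup instead:

```typescript
// Hypothetical helper: fail fast if the Adaptive key is missing,
// rather than letting the first API call error at runtime.
export function requireAdaptiveKey(): string {
  const key = process.env.ADAPTIVE_API_KEY;
  if (!key) {
    throw new Error('ADAPTIVE_API_KEY is not set - add it to .env.local');
  }
  return key;
}
```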

What You Get

  • Intelligent Routing: Automatic model selection based on your prompt's complexity.
  • Built-in Streaming: Real-time response streaming with React components.
  • Cost Optimization: Significant cost savings through smart provider selection.
  • Provider Transparency: See which AI provider was used for each request.

Next Steps