Overview
The Vercel AI SDK works seamlessly with Adaptive through two methods:
- Adaptive Provider (Recommended): Use the native @adaptive-llm/adaptive-ai-provider provider for built-in support.
- OpenAI Provider: Use Adaptive via @ai-sdk/openai with a custom base URL.
Method 1: Adaptive Provider
Installation
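Install the provider package alongside the core AI SDK (package names as published on npm; swap in your package manager of choice):

```bash
npm install @adaptive-llm/adaptive-ai-provider ai
```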
Basic Setup
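A minimal setup sketch. The createAdaptive factory name, its options, and the empty model ID (which leaves model selection to Adaptive) are assumptions; check the provider's README for the exact API.

```typescript
import { createAdaptive } from '@adaptive-llm/adaptive-ai-provider'; // assumed export name
import { generateText } from 'ai';

// Assumed factory options; the provider may also read ADAPTIVE_API_KEY from the environment.
const adaptive = createAdaptive({
  apiKey: process.env.ADAPTIVE_API_KEY,
});

const { text } = await generateText({
  model: adaptive(''), // empty model ID as a placeholder: Adaptive picks the model
  prompt: 'Explain quantum computing in one paragraph.',
});

console.log(text);
```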
Method 2: OpenAI Provider
Installation
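Install the OpenAI provider alongside the core AI SDK:

```bash
npm install @ai-sdk/openai ai
```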
Configuration
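A configuration sketch pointing createOpenAI at Adaptive's OpenAI-compatible endpoint. The baseURL below is a placeholder; use the endpoint and API key from your Adaptive dashboard.

```typescript
import { createOpenAI } from '@ai-sdk/openai';

// Route OpenAI-provider traffic through Adaptive's OpenAI-compatible API.
const adaptive = createOpenAI({
  baseURL: 'https://your-adaptive-endpoint/v1', // placeholder: your Adaptive base URL
  apiKey: process.env.ADAPTIVE_API_KEY,
});
```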
Text Generation
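A basic generateText call against the provider instance configured above. The empty model ID is a placeholder that defers model choice to Adaptive.

```typescript
import { generateText } from 'ai';

const { text, usage } = await generateText({
  model: adaptive(''), // placeholder model ID; Adaptive routes the request
  prompt: 'Write a haiku about distributed systems.',
});

console.log(text);
console.log(usage); // token usage reported for the request
```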
Streaming
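A streaming sketch with streamText, consuming the text stream chunk by chunk:

```typescript
import { streamText } from 'ai';

const result = streamText({
  model: adaptive(''), // placeholder model ID
  prompt: 'Stream a short story about a robot learning to paint.',
});

for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
```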
React Chat Component
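A minimal Next.js sketch: a route handler that streams from Adaptive plus a client component using the SDK's useChat hook. Import paths and response helpers vary by AI SDK version, so treat this as a v4-style example.

```typescript
// app/api/chat/route.ts
import { streamText } from 'ai';
import { createOpenAI } from '@ai-sdk/openai';

const adaptive = createOpenAI({
  baseURL: 'https://your-adaptive-endpoint/v1', // placeholder: your Adaptive base URL
  apiKey: process.env.ADAPTIVE_API_KEY,
});

export async function POST(req: Request) {
  const { messages } = await req.json();
  const result = streamText({ model: adaptive(''), messages });
  return result.toDataStreamResponse();
}
```

```tsx
// app/page.tsx
'use client';
import { useChat } from 'ai/react'; // '@ai-sdk/react' in newer SDK versions

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat();

  return (
    <div>
      {messages.map((m) => (
        <p key={m.id}>
          <strong>{m.role}:</strong> {m.content}
        </p>
      ))}
      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} placeholder="Say something..." />
      </form>
    </div>
  );
}
```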
Configuration Parameters
Advanced configuration options are available with the Adaptive provider for intelligent routing and optimization.
Parameter Details
model_router - Intelligent Model Selection
Controls intelligent model selection (see the sketch after this list):
- models: Array of allowed providers/models
  - { provider: "openai" } - All models from provider
  - { provider: "anthropic", model_name: "claude-3-sonnet" } - Specific model
- cost_bias: Balance cost vs performance (0-1)
  - 0 = Always choose cheapest option
  - 0.5 = Balanced cost and performance
  - 1 = Always choose best performance
- complexity_threshold: Override automatic complexity detection (0-1)
- token_threshold: Override automatic token counting threshold
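A sketch of how these options might be passed when using the Adaptive provider. The providerOptions.adaptive key and the exact field names are assumptions based on the parameters above; consult the provider's documentation for the supported shape.

```typescript
const { text } = await generateText({
  model: adaptive(''), // placeholder model ID
  prompt: 'Summarize the latest release notes.',
  providerOptions: {
    adaptive: {
      model_router: {
        models: [
          { provider: 'openai' },                                   // any OpenAI model
          { provider: 'anthropic', model_name: 'claude-3-sonnet' }, // one specific model
        ],
        cost_bias: 0.3, // lean toward cheaper models
      },
    },
  },
});
```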
fallback - Provider Fallback Behavior
Controls provider fallback behavior (sketch after the list):
- enabled: Enable/disable fallback (default: true)
- mode: Fallback strategy
  - "sequential" = Try providers one by one (lower cost)
  - "race" = Try multiple providers simultaneously (faster)
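A small sketch of the corresponding option object; it would sit under the same assumed providerOptions.adaptive key as the model_router example above.

```typescript
// Assumed shape; passed under providerOptions.adaptive.
const fallback = {
  enabled: true,
  mode: 'sequential', // or 'race' to query multiple providers at once
};
```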
prompt_response_cache - Semantic Caching
Improves performance by caching similar requests (sketch after the list):
- enabled: Enable semantic caching
- semantic_threshold: Similarity threshold (0-1) for cache hits
  - Higher values = more strict matching
  - Lower values = more cache hits but less accuracy
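A corresponding option sketch under the same assumed providerOptions.adaptive key:

```typescript
// Assumed shape; passed under providerOptions.adaptive.
const prompt_response_cache = {
  enabled: true,
  semantic_threshold: 0.85, // stricter matching: fewer but more accurate cache hits
};
```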
prompt_cache - Ultra-Fast Caching
Ultra-fast caching for identical requests (sketch after the list):
- enabled: Enable prompt response caching for this request
- ttl: Cache duration in seconds (default: 3600, i.e., 1 hour)
- Provides sub-millisecond response times for repeated requests
- Only successful responses are cached
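And a sketch for the prompt cache, again under the assumed providerOptions.adaptive key:

```typescript
// Assumed shape; passed under providerOptions.adaptive.
const prompt_cache = {
  enabled: true,
  ttl: 3600, // cache identical requests for one hour
};
```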
Custom Providers
Configure custom providers alongside standard ones using the Adaptive provider:
Custom Provider Configuration
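A sketch of what that configuration might look like, reusing the assumed model_router.models field. The custom provider fields (base_url, api_key, model_name) are assumptions; check Adaptive's custom-provider documentation for the exact names.

```typescript
const { text } = await generateText({
  model: adaptive(''), // placeholder model ID
  prompt: 'Classify this support ticket by urgency.',
  providerOptions: {
    adaptive: {
      model_router: {
        models: [
          { provider: 'openai' }, // standard provider
          {
            provider: 'my-custom-provider',                    // hypothetical custom provider
            base_url: 'https://my-llm-gateway.example.com/v1', // assumed field name
            api_key: process.env.CUSTOM_PROVIDER_API_KEY,      // assumed field name
            model_name: 'my-fine-tuned-model',
          },
        ],
      },
    },
  },
});
```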
Tool/Function Calling
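Tool calling goes through the AI SDK's standard tool helper and works the same way with an Adaptive-backed model. A v4-style sketch (newer SDK versions rename parameters to inputSchema):

```typescript
import { generateText, tool } from 'ai';
import { z } from 'zod';

const { text, toolCalls } = await generateText({
  model: adaptive(''), // placeholder model ID
  prompt: 'What is the weather in Paris?',
  tools: {
    getWeather: tool({
      description: 'Get the current weather for a city',
      parameters: z.object({
        city: z.string().describe('The city to look up'),
      }),
      execute: async ({ city }) => {
        // Replace with a real weather lookup.
        return { city, temperature: 21, conditions: 'sunny' };
      },
    }),
  },
});
```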
Cache Tier Tracking
Access cache information in the response when using the Adaptive provider:
Cache Tier Information
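A sketch reading provider metadata from the result. The adaptive key and cache_tier field are assumptions about how the provider reports cache usage.

```typescript
const result = await generateText({
  model: adaptive(''), // placeholder model ID
  prompt: 'Explain vector databases in two sentences.',
});

// Field names are assumptions; the provider may expose cache details differently.
const adaptiveMeta = result.providerMetadata?.adaptive;
console.log(adaptiveMeta?.cache_tier); // which cache tier served the request, if any
```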
Environment Variables
Environment Setup
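A typical .env.local for either method. The ADAPTIVE_API_KEY name matches the sketches above and is an assumption if the provider expects a different variable.

```bash
# .env.local
ADAPTIVE_API_KEY=your-adaptive-api-key
```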
What You Get
Intelligent Routing
Automatic model selection based on your prompt complexity
Built-in Streaming
Real-time response streaming with React components
Cost Optimization
Significant cost savings through smart provider selection
Provider Transparency
See which AI provider was used for each request