Get started with Adaptive by changing one line of code. No complex setup required.
Step 1: Get Your API Key
Generate Key
Generate your API key from the dashboard
Step 2: Install SDK (Optional)
JavaScript/Node.js
Python
cURL
No installation required - cURL is available on most systems.
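For reference, typical install commands for the tabs above, assuming the standard package names for each SDK (adjust to the SDKs you actually use):

```shell
# JavaScript/Node.js
npm install openai            # OpenAI SDK
npm install @anthropic-ai/sdk # Anthropic SDK

# Python
pip install openai
pip install anthropic
```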
Step 3: Make Your First Request
Choose your preferred language and framework:
OpenAI SDK
Anthropic SDK
Gemini SDK
Vercel AI SDK
LangChain
JavaScript/Node.js
Python
cURL
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'your-adaptive-api-key',
  baseURL: 'https://api.llmadaptive.uk/v1'
});

const response = await client.chat.completions.create({
  model: 'adaptive/auto', // or leave the model empty for intelligent routing
  messages: [{ role: 'user', content: 'Hello!' }]
});

console.log(response.choices[0].message.content);
JavaScript/Node.js
Python
cURL
import Anthropic from '@anthropic-ai/sdk';

const client = new Anthropic({
  apiKey: 'your-adaptive-api-key',
  baseURL: 'https://api.llmadaptive.uk/v1'
});

const response = await client.messages.create({
  model: 'adaptive/auto', // or leave the model empty for intelligent routing
  max_tokens: 1000,
  messages: [{ role: 'user', content: 'Hello!' }]
});

console.log(response.content[0].text);
JavaScript/Node.js
Python
cURL
import { GoogleGenAI } from '@google/genai';

const ai = new GoogleGenAI({
  apiKey: process.env.ADAPTIVE_API_KEY || 'your-adaptive-api-key',
  httpOptions: {
    baseUrl: 'https://api.llmadaptive.uk/v1beta'
  }
});

const response = await ai.models.generateContent({
  model: 'intelligent-routing',
  contents: [
    {
      role: 'user',
      parts: [{ text: 'Hello!' }]
    }
  ],
  config: {
    maxOutputTokens: 512
  }
});

console.log(response.text);
Basic Text Generation
Streaming
React Components
import { createOpenAI } from '@ai-sdk/openai';
import { generateText } from 'ai';

const adaptive = createOpenAI({
  baseURL: 'https://api.llmadaptive.uk/v1',
  apiKey: 'your-adaptive-api-key'
});

const { text } = await generateText({
  model: adaptive(''), // empty model string enables intelligent routing
  prompt: 'Hello!'
});

console.log(text);
JavaScript/Node.js
Python
Chains
import { ChatOpenAI } from '@langchain/openai';

const model = new ChatOpenAI({
  openAIApiKey: 'your-adaptive-api-key',
  configuration: {
    baseURL: 'https://api.llmadaptive.uk/v1'
  },
  modelName: 'adaptive/auto' // or leave the model empty for intelligent routing
});

const response = await model.invoke('Hello!');
console.log(response.content);
Error Handling
Always implement proper error handling in production. Adaptive provides detailed error information to help you build resilient applications.
TypeScript
Python
JavaScript (Browser)
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: process.env.ADAPTIVE_API_KEY,
  baseURL: 'https://api.llmadaptive.uk/v1'
});

async function chatWithRetry(message: string, maxRetries = 3) {
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      const response = await client.chat.completions.create({
        model: 'adaptive/auto',
        messages: [{ role: 'user', content: message }]
      });
      return response.choices[0].message.content;
    } catch (error: any) {
      console.error(`Attempt ${attempt} failed:`, error.message);

      // Check for FallbackError (unique to Adaptive)
      if (error.response?.data?.error?.type === 'fallback_failed') {
        const failures = error.response.data.error.details.failures;
        console.log('Provider failures:', failures.map((f: any) => ({
          provider: f.provider,
          model: f.model,
          error: f.error,
          duration: f.duration_ms
        })));
      }

      if (attempt === maxRetries) throw error;

      // Exponential backoff
      await new Promise(resolve =>
        setTimeout(resolve, Math.pow(2, attempt) * 1000)
      );
    }
  }
}

// Usage
try {
  const result = await chatWithRetry('Explain quantum computing');
  console.log(result);
} catch (error) {
  console.error('All retries failed:', error);
  // Implement fallback strategy (cached response, default message, etc.)
}
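The retry loop above waits 2^attempt seconds between attempts. In production, adding random jitter avoids many clients retrying in lockstep; a small sketch (the helper name is illustrative, not part of any SDK):

```javascript
// Exponential backoff with full jitter: the delay is a random value
// in [0, baseMs * 2^attempt), capped at maxDelayMs.
function backoffDelayMs(attempt, baseMs = 1000, maxDelayMs = 30000) {
  const exp = Math.min(baseMs * Math.pow(2, attempt), maxDelayMs);
  return Math.random() * exp;
}

console.log(backoffDelayMs(1) < 2000);  // true: at most 2s after attempt 1
console.log(backoffDelayMs(5) < 30000); // true: cap kicks in (2^5 s > 30 s)
```

Swap `Math.pow(2, attempt) * 1000` in the retry loop for `backoffDelayMs(attempt)` to get the jittered behavior.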
Production Tip: Always log the request_id from error responses for debugging. For comprehensive error-handling patterns, see the Error Handling Best Practices guide.
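A sketch of pulling a request id out of an error payload for logging; the field path below is an assumption for illustration, so adjust it to the actual error shape Adaptive returns:

```javascript
// Extract a request id from an error-like object for logging.
// The path (response.data.request_id) is assumed, not documented here.
function extractRequestId(error) {
  return error?.response?.data?.request_id ?? 'unknown';
}

// Mock error objects for illustration
const apiError = { response: { data: { request_id: 'req_123' } } };
console.log(extractRequestId(apiError));            // req_123
console.log(extractRequestId(new Error('network'))); // unknown
```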
Key Features
Intelligent Routing: Leave the model field empty and let our AI choose the optimal provider for your request
Cost Savings: Save 60-90% on AI costs with automatic model selection
6+ Providers: Access OpenAI, Anthropic, Google, Groq, DeepSeek, and Grok
Drop-in Replacement: Works with existing OpenAI and Anthropic SDK code
Example Response
OpenAI Format
Anthropic Format
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "gpt-5-nano",
  "choices": [{
    "index": 0,
    "message": {
      "role": "assistant",
      "content": "Hello! I'm ready to help you."
    },
    "finish_reason": "stop"
  }],
  "usage": {
    "prompt_tokens": 5,
    "completion_tokens": 10,
    "total_tokens": 15
  }
}
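Reading the fields of the response above, e.g. to log which model served the request and the token usage:

```javascript
// Parse the example completion payload and pull out commonly used fields.
const payload = JSON.parse(`{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "gpt-5-nano",
  "choices": [{
    "index": 0,
    "message": { "role": "assistant", "content": "Hello! I'm ready to help you." },
    "finish_reason": "stop"
  }],
  "usage": { "prompt_tokens": 5, "completion_tokens": 10, "total_tokens": 15 }
}`);

console.log(payload.model);                      // gpt-5-nano
console.log(payload.choices[0].message.content); // Hello! I'm ready to help you.
console.log(payload.usage.total_tokens);         // 15
```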
Adaptive returns standard OpenAI or Anthropic-compatible responses.
Testing Your Integration
Send Test Request
Run your code with a simple message like “Hello!” to verify the connection
Check Response
Confirm you receive a response and check the provider field to see which model was selected
Next Steps
Need Help?