Overview

Adaptive is a drop-in replacement for the OpenAI API that automatically routes each request to the best AI model for the task. Change your base URL and start saving up to 80% on AI costs.
No code rewrites required - any OpenAI-compatible client works once you point it at Adaptive's base URL.

Step 1: Get Your API Key

1. Sign Up - Create a free account at llmadaptive.uk
2. Create Project - Set up a new project in your dashboard
3. Generate API Key - Create a key from the API Keys section of your dashboard
Keep your API key secure. Never expose it in client-side code or public repositories.

Step 2: Update Your Code

The only change needed is updating your base URL:
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: 'your-adaptive-api-key',
  baseURL: 'https://llmadaptive.uk/api/v1'
});

// Use exactly like OpenAI
const completion = await openai.chat.completions.create({
  model: '', // Leave empty for intelligent routing
  messages: [
    { role: 'user', content: 'Explain quantum computing simply' }
  ],
});

console.log(completion.choices[0].message.content);

Step 3: Test Your Integration

Run your code. If the request succeeds, you'll get back a completion in the standard OpenAI response format.
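To sanity-check the integration, you can inspect the fields of the response. This sketch assumes Adaptive mirrors OpenAI's response shape and reports the model it actually selected in the response's model field; the mock values below are illustrative only:

```javascript
// Sketch: pull the routing-relevant fields out of a chat completion.
// Assumes the response follows OpenAI's format, with the model Adaptive
// chose reported in `completion.model` (an assumption, not verified here).
function summarizeCompletion(completion) {
  const choice = completion.choices?.[0];
  return {
    routedModel: completion.model,        // which model handled the request
    finishReason: choice?.finish_reason,  // 'stop', 'length', etc.
    preview: (choice?.message?.content ?? '').slice(0, 60),
  };
}

// Example against a mock response in OpenAI's format:
const mock = {
  model: 'claude-3-5-haiku', // illustrative value only
  choices: [
    {
      finish_reason: 'stop',
      message: { role: 'assistant', content: 'Quantum computing uses qubits...' },
    },
  ],
};
console.log(summarizeCompletion(mock));
```

Logging the routed model on a few requests is a quick way to confirm that intelligent routing is active.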

Key Differences from OpenAI

Base URL

OpenAI: https://api.openai.com/v1
Adaptive: https://llmadaptive.uk/api/v1

Model Selection

OpenAI: You specify an exact model name
Adaptive: Leave model as "" for intelligent routing

Providers

OpenAI: OpenAI only
Adaptive: 6+ providers (OpenAI, Anthropic, Google, etc.)

Cost

OpenAI: Fixed pricing
Adaptive: Up to 80% savings

Advanced Configuration

Control Provider Selection

Limit which providers Adaptive can choose from:
const completion = await openai.chat.completions.create({
  model: '',
  messages: [{ role: 'user', content: 'Hello!' }],
  provider_constraints: ['openai', 'anthropic'], // Only use these
});
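Since a typo in a provider name would silently change routing behavior, it can help to validate constraints before sending the request. This is a hypothetical client-side guard, not part of any SDK, and it lists only the providers named in this guide; the full set depends on your account:

```javascript
// Hypothetical guard: check provider_constraints values before a request.
// Only the providers named in this guide are listed here - the actual
// set available to you depends on your Adaptive account.
const KNOWN_PROVIDERS = new Set(['openai', 'anthropic', 'google']);

function validateProviderConstraints(constraints) {
  const unknown = constraints.filter((p) => !KNOWN_PROVIDERS.has(p));
  if (unknown.length > 0) {
    throw new Error(`Unknown providers: ${unknown.join(', ')}`);
  }
  return constraints;
}

// Passes: both names are recognized.
validateProviderConstraints(['openai', 'anthropic']);
```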

Cost vs Performance Balance

Control the cost/performance trade-off:
const completion = await openai.chat.completions.create({
  model: '',
  messages: [{ role: 'user', content: 'Hello!' }],
  cost_bias: 0.2, // 0 = cheapest, 1 = best performance
});
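Since cost_bias is a continuous knob, teams often standardize a few values per workload rather than tuning it per request. The preset values below are illustrative examples, not recommendations from Adaptive, and the clamp simply enforces the documented [0, 1] range:

```javascript
// Illustrative cost_bias presets (0 = cheapest, 1 = best performance).
// The specific values are examples, not Adaptive's recommendations.
const COST_BIAS_PRESETS = {
  bulk: 0.0,     // e.g. batch summarization where the cheapest model is fine
  balanced: 0.5, // default trade-off
  quality: 0.9,  // e.g. customer-facing answers
};

// Keep cost_bias within its documented [0, 1] range.
function clampCostBias(value) {
  return Math.min(1, Math.max(0, value));
}
```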

Streaming Support

Streaming works exactly like OpenAI:
const stream = await openai.chat.completions.create({
  model: '',
  messages: [{ role: 'user', content: 'Tell me a story' }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || '');
}
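If you need the full message rather than incremental output, the same loop can accumulate the deltas. This sketch uses the chunk shape the OpenAI SDK yields (choices[0].delta.content); the mock async generator stands in for a real stream so the logic is easy to verify:

```javascript
// Sketch: collect streamed deltas into a single string, using the same
// chunk shape the OpenAI SDK yields (choices[0].delta.content).
async function collectStream(stream) {
  let text = '';
  for await (const chunk of stream) {
    text += chunk.choices[0]?.delta?.content || '';
  }
  return text;
}

// Mock async iterable standing in for a real response stream:
async function* mockStream() {
  yield { choices: [{ delta: { content: 'Once ' } }] };
  yield { choices: [{ delta: { content: 'upon a time' } }] };
  yield { choices: [{ delta: {} }] }; // final chunks may carry no content
}

collectStream(mockStream()).then((text) => console.log(text));
```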

Environment Setup

For production deployments, keep credentials in environment variables:
# .env file
ADAPTIVE_API_KEY=your-adaptive-api-key
ADAPTIVE_BASE_URL=https://llmadaptive.uk/api/v1

Then read them when constructing the client:
const openai = new OpenAI({
  apiKey: process.env.ADAPTIVE_API_KEY,
  baseURL: process.env.ADAPTIVE_BASE_URL
});
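A missing key is easier to debug at startup than as a 401 at request time, so it can be worth failing fast. This is a hypothetical helper (the loadAdaptiveConfig name is ours); the variable names match the .env example above, and the base URL falls back to the value from this guide:

```javascript
// Hypothetical startup check: fail fast if required settings are missing.
// Variable names match the .env example in this guide.
function loadAdaptiveConfig(env = process.env) {
  const apiKey = env.ADAPTIVE_API_KEY;
  if (!apiKey) throw new Error('ADAPTIVE_API_KEY is not set');
  return {
    apiKey,
    // Fall back to the base URL from this guide if none is configured.
    baseURL: env.ADAPTIVE_BASE_URL || 'https://llmadaptive.uk/api/v1',
  };
}
```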

Error Handling

Handle errors the same way as OpenAI:
try {
  const completion = await openai.chat.completions.create({
    model: '',
    messages: [{ role: 'user', content: 'Hello!' }],
  });
} catch (error) {
  if (error.status === 401) {
    console.error('Invalid API key');
  } else if (error.status === 429) {
    console.error('Rate limit exceeded');
  } else {
    console.error('Request failed:', error.message);
  }
}
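If you retry failed requests, it helps to separate errors worth retrying from ones that are not. This sketch maps the HTTP status codes above to a message and a retry decision; the status meanings follow the OpenAI convention, and the classifyError helper is ours, not part of any SDK:

```javascript
// Sketch: classify a failed request's HTTP status into a user-facing
// message and a retry decision. 429 (rate limit) is usually worth
// retrying with backoff; 401 (bad key) is not.
function classifyError(status) {
  switch (status) {
    case 401: return { message: 'Invalid API key', retryable: false };
    case 429: return { message: 'Rate limit exceeded', retryable: true };
    case 500:
    case 503: return { message: 'Server error', retryable: true };
    default:  return { message: 'Request failed', retryable: false };
  }
}
```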

Migration Guide

See our migration guide for detailed steps