Quick Setup

  1. Install the OpenAI SDK (if you haven’t already):
npm install openai
  2. Change two lines in your existing code:
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: 'your-adaptive-api-key',        // ← Your Adaptive API key
  baseURL: 'https://www.llmadaptive.uk/api/v1' // ← Adaptive's base URL (the second change)
});

// Everything else works exactly the same
const completion = await openai.chat.completions.create({
  model: '', // Leave empty for intelligent routing
  messages: [{ role: 'user', content: 'Hello!' }]
});
That’s it! Your existing OpenAI code now uses intelligent routing.
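
If you prefer not to hard-code credentials, the same two-line change works with an environment variable. A minimal sketch, assuming you store the key in a variable named ADAPTIVE_API_KEY (the name is an example, not something Adaptive requires):

import OpenAI from 'openai';

// Read the key from the environment instead of hard-coding it.
// ADAPTIVE_API_KEY is an arbitrary variable name chosen for this example.
const openai = new OpenAI({
  apiKey: process.env.ADAPTIVE_API_KEY,
  baseURL: 'https://www.llmadaptive.uk/api/v1',
});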

Key Features

  Same API: All OpenAI methods, parameters, and responses work identically.

  Intelligent Routing: Leave model empty ("") to automatically select the best provider.

  Streaming: Streaming responses work exactly as they do with OpenAI.

  Function Calling: Function calling and tools work without changes (see the sketch after this list).
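
Because the API surface is unchanged, a tools request looks the same as it does with the OpenAI SDK. A minimal sketch, using a made-up get_weather tool and an empty model string for routing:

const completion = await openai.chat.completions.create({
  model: '', // Empty model string: let Adaptive pick the provider
  messages: [{ role: 'user', content: 'What is the weather in Paris?' }],
  tools: [
    {
      type: 'function',
      function: {
        name: 'get_weather', // Hypothetical tool for this example
        description: 'Get the current weather for a city',
        parameters: {
          type: 'object',
          properties: {
            city: { type: 'string' },
          },
          required: ['city'],
        },
      },
    },
  ],
});

// If the model chose to call the tool, the call arrives in tool_calls,
// exactly as it would from OpenAI directly.
const toolCall = completion.choices[0].message.tool_calls?.[0];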

Streaming Example

const stream = await openai.chat.completions.create({
  model: '',
  messages: [{ role: 'user', content: 'Tell me a story' }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || '');
}

Response Format

Adaptive returns standard OpenAI responses with one extra field:
{
  "id": "chatcmpl-123",
  "choices": [{
    "message": {"content": "Hello! How can I help you?"}
  }],
  "usage": {"total_tokens": 21},
  "provider": "gemini"  // ← Shows which provider was used
}
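
The extra provider field is not part of the OpenAI SDK's response types, so TypeScript users may want to widen the type when reading it. A minimal sketch; the type assertion below is just one way to access the field:

const completion = await openai.chat.completions.create({
  model: '',
  messages: [{ role: 'user', content: 'Hello!' }],
});

// `provider` is Adaptive's extra field, so it isn't declared on the SDK's
// ChatCompletion type; read it through a widened type.
const provider = (completion as { provider?: string }).provider;
console.log(`Routed to: ${provider}`);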

Need More Control?