Get started with Adaptive by changing one line of code. No complex setup required.

Step 1: Get Your API Key

1. Sign Up: Create a free account at llmadaptive.uk
2. Generate Key: Generate your API key from the dashboard

Step 2: Install SDK (Optional)

npm install openai

Step 3: Make Your First Request

The example below uses the OpenAI SDK for JavaScript; the same pattern works with any OpenAI-compatible client:
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'your-adaptive-api-key',
  baseURL: 'https://www.llmadaptive.uk/api/v1'
});

const response = await client.chat.completions.create({
  model: '', // Leave empty for intelligent routing
  messages: [{ role: 'user', content: 'Hello!' }]
});

console.log(response.choices[0].message.content);
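Leaving model empty hands routing over to Adaptive; with an OpenAI-compatible API, passing a model name instead would pin the request to that model. A minimal sketch of the two request shapes (the pinned model name below is only an illustrative example, not a recommendation):

```javascript
// Request parameters only; pass either object to client.chat.completions.create(...)
const routedParams = {
  model: '', // empty -> Adaptive picks the provider and model
  messages: [{ role: 'user', content: 'Hello!' }],
};

const pinnedParams = {
  ...routedParams,
  model: 'gpt-4o-mini', // example only: bypass routing and name a model directly
};
```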

Key Features

Intelligent Routing

Leave model empty and let our AI choose the optimal provider for your request

Cost Savings

Save 60-80% on AI costs with automatic model selection

6+ Providers

Access OpenAI, Anthropic, Google, Groq, DeepSeek, and Grok

Drop-in Replacement

Works with existing OpenAI and Anthropic SDK code
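Because only the client configuration changes, switching existing OpenAI SDK code over is a two-line edit. A minimal sketch (the environment variable names are illustrative):

```javascript
// Stock OpenAI configuration vs. the same client pointed at Adaptive.
// Only the API key and baseURL change; the rest of your SDK code stays the same.
const stockConfig = {
  apiKey: process.env.OPENAI_API_KEY,
};

const adaptiveConfig = {
  apiKey: process.env.ADAPTIVE_API_KEY, // key from your Adaptive dashboard
  baseURL: 'https://www.llmadaptive.uk/api/v1', // route requests through Adaptive
};
```

Pass the second object to `new OpenAI(...)` and every subsequent call goes through Adaptive unchanged.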

Example Response

{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "gpt-3.5-turbo",
  "choices": [{
    "index": 0,
    "message": {
      "role": "assistant",
      "content": "Hello! I'm ready to help you."
    },
    "finish_reason": "stop"
  }],
  "usage": {
    "prompt_tokens": 5,
    "completion_tokens": 10,
    "total_tokens": 15
  },
  "provider": "gemini",
  "cache_tier": "none"
}
Adaptive returns standard OpenAI- or Anthropic-compatible responses, with additional metadata fields such as provider and cache_tier that show which provider served the request and whether a cache was hit.
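The extra fields can be read like any other response property. A short sketch, using the example response above as static data:

```javascript
// The example response from above, as a plain object
const response = {
  id: 'chatcmpl-abc123',
  object: 'chat.completion',
  created: 1677652288,
  model: 'gpt-3.5-turbo',
  choices: [{
    index: 0,
    message: { role: 'assistant', content: "Hello! I'm ready to help you." },
    finish_reason: 'stop',
  }],
  usage: { prompt_tokens: 5, completion_tokens: 10, total_tokens: 15 },
  provider: 'gemini',
  cache_tier: 'none',
};

// Standard OpenAI fields work unchanged
const reply = response.choices[0].message.content;

// Adaptive-specific metadata
console.log(`served by ${response.provider}, cache tier: ${response.cache_tier}`);
console.log(reply);
```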

Testing Your Integration

1. Send Test Request: Run your code with a simple message like "Hello!" to verify the connection
2. Check Response: Confirm you receive a response and check the provider field to see which model was selected
3. Monitor Dashboard: View request logs and analytics in your Adaptive dashboard
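The response check above amounts to inspecting a couple of fields on the returned object. A small helper you could drop into a test script (the function name and shape checks are illustrative, not part of any SDK):

```javascript
// Hypothetical smoke check: confirms a completion came back and reports
// which provider Adaptive routed the request to.
function checkAdaptiveResponse(response) {
  const content = response?.choices?.[0]?.message?.content;
  if (!content) {
    throw new Error('No completion content in response');
  }
  // `provider` is Adaptive metadata; fall back to 'unknown' for plain OpenAI responses
  return { content, provider: response.provider ?? 'unknown' };
}
```

Call it on the object returned by client.chat.completions.create(...) and log the provider field to confirm routing is working.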

Next Steps

Need Help?