Common Issues

Authentication Problems

Problem: Getting authentication errors when making API calls.

Solutions:
  1. Check your API key:
    # Verify your API key is set correctly
    echo $ADAPTIVE_API_KEY
    
  2. Ensure correct format:
    // Correct - no 'Bearer' prefix needed
    const openai = new OpenAI({
      apiKey: 'your-adaptive-api-key',
      baseURL: 'https://api.llmadaptive.uk/v1'
    });
    
  3. Verify API key validity:
    • Check if your API key has expired
    • Ensure you’re using the correct key for your environment
    • Try regenerating your API key in the dashboard
  4. Test with curl:
    curl -H "X-Stainless-API-Key: your-adaptive-api-key" \
         -H "Content-Type: application/json" \
         https://api.llmadaptive.uk/v1/chat/completions \
         -d '{"model":"","messages":[{"role":"user","content":"test"}]}'
    
Problem: Environment variable not being loaded.

Solutions:
  1. Check environment variable:
    # In terminal
    export ADAPTIVE_API_KEY=your-key-here
    
    # Or in .env file
    echo "ADAPTIVE_API_KEY=your-key-here" >> .env
    
  2. Load environment variables:
    // Node.js
    require('dotenv').config();
    
    // Or using ES modules
    import 'dotenv/config';
    
  3. Python environment:
    import os
    from dotenv import load_dotenv
    
    load_dotenv()
    api_key = os.getenv("ADAPTIVE_API_KEY")
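    
Whichever loader you use, a startup check catches a missing key early. A minimal sketch in Node.js:

    // Fail fast if the key never made it into the environment
    if (!process.env.ADAPTIVE_API_KEY) {
      throw new Error('ADAPTIVE_API_KEY is not set - check your .env file');
    }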
    

Configuration Issues

Problem: Using an incorrect base URL, causing connection failures.

Correct base URL:
https://api.llmadaptive.uk/v1
Common mistakes:
// ❌ Wrong
baseURL: 'https://api.openai.com/v1'
baseURL: 'https://adaptive.ai/api/v1'
baseURL: 'https://www.llmadaptive.uk/v1'

// ✅ Correct
baseURL: 'https://api.llmadaptive.uk/v1'
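If the wrong URL keeps creeping back in, a startup assertion catches it before the first request fails. A minimal sketch - the EXPECTED_BASE_URL constant and the config object are illustrative, not part of the SDK:

// Fail fast on a misconfigured base URL instead of
// debugging connection errors at request time
const EXPECTED_BASE_URL = 'https://api.llmadaptive.uk/v1';

function assertBaseURL(baseURL) {
  if (baseURL !== EXPECTED_BASE_URL) {
    throw new Error(`Unexpected baseURL "${baseURL}" - expected "${EXPECTED_BASE_URL}"`);
  }
}

assertBaseURL(config.baseURL); // config is your own settings object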
Problem: Intelligent routing not working, or model errors.

Solutions:
  1. Use an empty string for intelligent routing:
    // ✅ Correct - enables intelligent routing
    model: ""
    
    // ❌ Wrong - tries to use a specific model, bypassing routing
    model: "gpt-4"
    model: "claude-3-5-sonnet"
    model: "gemini-pro"
    
  2. TypeScript type issues:
    // Option 1: Type assertion
    model: "" as any
    
    // Option 2: Disable strict checking for this parameter
    // @ts-ignore
    model: ""
    
Problem: Certificate validation errors in some environments.

Solutions:
  1. Update certificates:
    # Ubuntu/Debian
    sudo apt-get update && sudo apt-get install ca-certificates
    
    # macOS
    brew install ca-certificates
    
  2. Node.js certificate issues:
    // Temporary workaround (not recommended for production)
    // Note: environment variables must be strings, not numbers
    process.env.NODE_TLS_REJECT_UNAUTHORIZED = '0';
    
    // Better solution: update Node.js or certificates
    
  3. Python certificate issues:
    import ssl
    import certifi
    
    # Ensure certificates are up to date
    ssl.create_default_context(cafile=certifi.where())
    

Request/Response Issues

Problem: Getting empty responses or no content.

Diagnostic steps:
  1. Check request format:
    const completion = await openai.chat.completions.create({
      model: "",
      messages: [
        { role: "user", content: "Hello" } // Ensure content is not empty
      ]
    });
    
  2. Verify response handling:
    console.log("Full response:", completion);
    console.log("Content:", completion.choices[0]?.message?.content);
    console.log("Provider:", completion.provider);
    
  3. Check for API errors:
    try {
      const completion = await openai.chat.completions.create({...});
    } catch (error) {
      console.log("Error details:", error);
      console.log("Status:", error.status);
      console.log("Message:", error.message);
    }
    
Problem: Streaming responses not appearing or failing.

Solutions:
  1. Check streaming syntax:
    // ✅ Correct streaming setup
    const stream = await openai.chat.completions.create({
      model: "",
      messages: [...],
      stream: true
    });
    
    for await (const chunk of stream) {
      process.stdout.write(chunk.choices[0]?.delta?.content || '');
    }
    
  2. Browser streaming with fetch:
    const response = await fetch('/api/stream-chat', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ message })
    });
    
    const reader = response.body.getReader();
    const decoder = new TextDecoder();
    
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      
      const chunk = decoder.decode(value);
      // Process chunk...
    }
    
  3. Server-sent events setup:
    // Server
    res.writeHead(200, {
      'Content-Type': 'text/event-stream',
      'Cache-Control': 'no-cache',
      'Connection': 'keep-alive'
    });
    
    // Forward each upstream chunk (the stream from step 1) as an SSE event
    for await (const chunk of stream) {
      res.write(`data: ${JSON.stringify(chunk)}\n\n`);
    }
    res.end();
    
Problem: Getting 429 errors (rate limit exceeded).

Solutions:
  1. Implement exponential backoff:
    async function retryWithBackoff(fn, maxRetries = 3) {
      for (let i = 0; i < maxRetries; i++) {
        try {
          return await fn();
        } catch (error) {
          if (error.status === 429 && i < maxRetries - 1) {
            const delay = Math.pow(2, i) * 1000; // 1s, 2s, 4s
            await new Promise(resolve => setTimeout(resolve, delay));
            continue;
          }
          throw error;
        }
      }
    }
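    
    For example, wrap any call in the helper:
    
    const completion = await retryWithBackoff(() =>
      openai.chat.completions.create({
        model: "",
        messages: [{ role: "user", content: "Hello" }]
      })
    );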
    
  2. Check your rate limits:
    • Free tier: 100 requests/minute, 10,000 tokens/minute
    • Pro tier: 1,000 requests/minute, 100,000 tokens/minute
    • Enterprise: Custom limits
  3. Implement request queuing:
    class RequestQueue {
      constructor(maxPerMinute = 100) {
        this.queue = [];
        this.maxPerMinute = maxPerMinute;
        this.requestTimes = [];
      }
      
      async enqueue(requestFn) {
        return new Promise((resolve, reject) => {
          this.queue.push({ requestFn, resolve, reject });
          this.processQueue();
        });
      }
      
      async processQueue() {
        if (this.queue.length === 0) return;
        
        const now = Date.now();
        this.requestTimes = this.requestTimes.filter(time => now - time < 60000);
        
        if (this.requestTimes.length < this.maxPerMinute) {
          const { requestFn, resolve, reject } = this.queue.shift();
          this.requestTimes.push(now);
          
          try {
            const result = await requestFn();
            resolve(result);
          } catch (error) {
            reject(error);
          }
          
          // Process next request
          setTimeout(() => this.processQueue(), 100);
        } else {
          // Wait and try again
          setTimeout(() => this.processQueue(), 1000);
        }
      }
    }
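    
    For example, route every call through the queue (a maxPerMinute of 100 matches the free tier above):
    
    const queue = new RequestQueue(100);
    
    const completion = await queue.enqueue(() =>
      openai.chat.completions.create({
        model: "",
        messages: [{ role: "user", content: "Hello" }]
      })
    );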
    

Integration-Specific Issues

Problem: LangChain not working with Adaptive.

Solutions:
  1. Correct LangChain setup:
    # Python
    from langchain_openai import ChatOpenAI
    
    llm = ChatOpenAI(
        api_key=os.getenv("ADAPTIVE_API_KEY"),
        base_url="https://api.llmadaptive.uk/v1",
        model=""  # Important: empty string
    )
    
    // JavaScript
    import { ChatOpenAI } from "@langchain/openai";
    
    const llm = new ChatOpenAI({
      apiKey: process.env.ADAPTIVE_API_KEY,
      configuration: {
        baseURL: "https://api.llmadaptive.uk/v1"
      },
      model: ""
    });
    
  2. Handle LangChain-specific errors:
    from openai import APIError
    
    try:
        response = llm.invoke("Hello")
    except APIError as e:
        print(f"API Error: {e}")
    except Exception as e:
        print(f"Other error: {e}")
    
Problem: Vercel AI SDK not connecting properly.

Solutions:
  1. Use the OpenAI provider factory:
    import { createOpenAI } from '@ai-sdk/openai';
    import { generateText } from 'ai';
    
    const adaptiveOpenAI = createOpenAI({
      apiKey: process.env.ADAPTIVE_API_KEY,
      baseURL: 'https://api.llmadaptive.uk/v1',
    });
    
    const { text } = await generateText({
      model: adaptiveOpenAI(''), // Empty string for routing
      prompt: 'Hello'
    });
    
  2. TypeScript issues:
    // If getting type errors
    const model = adaptiveOpenAI('' as any);
    
  3. Environment variables in Next.js:
    # .env.local - loaded automatically; without a NEXT_PUBLIC_ prefix
    # the variable stays server-side and is never sent to the browser
    ADAPTIVE_API_KEY=your-key-here
    
    // Avoid exposing the key via the `env` option in next.config.js:
    // that inlines the value into the client-side JavaScript bundle.
    

Performance Issues

Problem: Responses taking longer than expected.

Diagnostic steps:
  1. Check routing decisions:
    const completion = await openai.chat.completions.create({
      model: "",
      messages: [...]
    });
    
    console.log("Selected provider:", completion.provider);
    console.log("Selected model:", completion.model);
    
  2. Optimize with cost_bias:
    // Prefer faster, cheaper models
    const completion = await openai.chat.completions.create({
      model: "",
      messages: [...],
      cost_bias: 0.2 // 0 = cheapest/fastest, 1 = best quality
    });
    
  3. Use provider constraints for speed:
    // Route only to fast providers
    const completion = await openai.chat.completions.create({
      model: "",
      messages: [...],
      provider_constraint: ["groq", "gemini"] // Fast providers
    });
    
Problem: Network latency issues.

Solutions:
  1. Check your network:
    # Test connectivity
    ping llmadaptive.uk
    
    # Test TLS handshake
    curl -w "@curl-format.txt" -o /dev/null https://api.llmadaptive.uk/v1/
    
  2. Implement timeout handling:
    const controller = new AbortController();
    const timeoutId = setTimeout(() => controller.abort(), 30000); // 30s timeout
    
    try {
      const completion = await openai.chat.completions.create({
        model: "",
        messages: [...]
      }, {
        signal: controller.signal
      });
    } catch (error) {
      if (error.name === 'AbortError') {
        console.log('Request timed out');
      }
    } finally {
      clearTimeout(timeoutId);
    }
    
  3. Use connection pooling:
    import https from 'https';
    
    const agent = new https.Agent({
      keepAlive: true,
      maxSockets: 10
    });
    
    const openai = new OpenAI({
      apiKey: process.env.ADAPTIVE_API_KEY,
      baseURL: 'https://api.llmadaptive.uk/v1',
      httpAgent: agent
    });
    

Development Environment Issues

Problem: Cross-origin resource sharing (CORS) errors.

Solutions:
  1. Never call API directly from browser:
    // ❌ Wrong - exposes API key
    // const completion = await openai.chat.completions.create({...});
    
    // ✅ Correct - use your backend
    const response = await fetch('/api/chat', {
      method: 'POST',
      body: JSON.stringify({ message })
    });
    
  2. Set up proxy in development:
    // Next.js API route
    // pages/api/chat.js
    export default async function handler(req, res) {
      const completion = await openai.chat.completions.create({
        model: "",
        messages: req.body.messages
      });
      
      res.json({ response: completion.choices[0].message.content });
    }
    
  3. Configure CORS for your backend:
    // Express.js
    app.use(cors({
      origin: ['http://localhost:3000', 'https://yourdomain.com'],
      credentials: true
    }));
    
Problem: TypeScript errors with the Adaptive integration.

Solutions:
  1. Install correct types:
    npm install --save-dev @types/node
    npm install openai  # Latest version includes types
    
  2. Type assertion for model parameter:
    const completion = await openai.chat.completions.create({
      model: "" as any, // Type assertion
      messages: [...]
    });
    
  3. Create custom types if needed:
    import type { ChatCompletion } from 'openai/resources/chat/completions';
    
    // Adaptive adds a `provider` field on top of the standard response
    interface AdaptiveCompletion extends ChatCompletion {
      provider: string;
    }
    
Problem: ES modules vs CommonJS issues.

Solutions:
  1. Use correct imports:
    // ES modules
    import OpenAI from 'openai';
    
    // CommonJS
    const OpenAI = require('openai');
    
  2. Package.json configuration:
    {
      "type": "module",
      "dependencies": {
        "openai": "^4.0.0"
      }
    }
    
  3. Node.js version compatibility:
    # Check Node.js version
    node --version
    
    # Adaptive requires Node.js 18+
    # Update if necessary
    

Getting Help

Debug Information to Collect

When reporting issues, please include:
1. Environment Details

# System info
node --version
npm --version

# Package versions
npm list openai
npm list @langchain/openai
2. Request Details

// Sanitized request (remove API key)
{
  "model": "",
  "messages": [...],
  "provider_constraint": [...],
  "cost_bias": 0.5
}
3. Error Information

console.log("Error status:", error.status);
console.log("Error message:", error.message);
console.log("Error stack:", error.stack);
4. Network Diagnostics

# Test connectivity
curl -I https://api.llmadaptive.uk/v1/

# DNS resolution
nslookup llmadaptive.uk

Support Channels

Documentation

Check our comprehensive guides and API reference for solutions

GitHub Issues

Report bugs and request features on our GitHub repository

Discord Community

Get help from the community and Adaptive team members

Email Support

Contact support@adaptive.com for priority assistance

Best Practices for Debugging

1. Start with Simple Requests

Test basic functionality first
const simple = await openai.chat.completions.create({
  model: "",
  messages: [{ role: "user", content: "Hello" }]
});
2. Enable Verbose Logging

Add detailed logging to understand what’s happening
console.log("Request:", JSON.stringify(requestData, null, 2));
console.log("Response:", JSON.stringify(response, null, 2));
3. Test with curl

Verify API access outside your application
curl -X POST https://api.llmadaptive.uk/v1/chat/completions \
  -H "X-Stainless-API-Key: $ADAPTIVE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model":"","messages":[{"role":"user","content":"test"}]}'
4. Isolate the Problem

Systematically narrow down the issue (a harness sketch follows this list):
  • Test different messages
  • Try different parameters
  • Test in different environments
  • Compare with working examples
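
A small harness makes this systematic. A sketch - the variations array is illustrative, reusing parameters shown earlier in this guide:

// Run one prompt across parameter variations and compare outcomes
const variations = [
  { label: "baseline", params: {} },
  { label: "low cost_bias", params: { cost_bias: 0.1 } },
  { label: "constrained providers", params: { provider_constraint: ["groq"] } }
];

for (const { label, params } of variations) {
  try {
    const completion = await openai.chat.completions.create({
      model: "",
      messages: [{ role: "user", content: "Hello" }],
      ...params
    });
    console.log(`${label}: ok via ${completion.provider}`);
  } catch (error) {
    console.log(`${label}: failed (${error.status}) ${error.message}`);
  }
}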

Complete Error Handling Example

Here’s a production-ready error handling implementation:
class AdaptiveClient {
  constructor(apiKey) {
    this.openai = new OpenAI({
      apiKey: apiKey,
      baseURL: 'https://api.llmadaptive.uk/v1'
    });
  }
  
  async createCompletion(params, retries = 3) {
    for (let attempt = 1; attempt <= retries; attempt++) {
      try {
        const completion = await this.openai.chat.completions.create({
          model: "",
          ...params
        });
        
        // Log success metrics
        console.log(`✅ Success: ${completion.provider} | ${completion.usage.total_tokens} tokens`);
        return completion;
        
      } catch (error) {
        // Handle specific errors
        if (error.status === 401) {
          throw new Error('Invalid API key - check your credentials');
        }
        
        if (error.status === 429) {
          const delay = Math.min(1000 * Math.pow(2, attempt), 10000);
          console.log(`⚠️  Rate limited, retrying in ${delay}ms (attempt ${attempt}/${retries})`);
          
          if (attempt < retries) {
            await new Promise(resolve => setTimeout(resolve, delay));
            continue;
          }
          throw new Error('Rate limit exceeded - reduce request frequency');
        }
        
        if (error.status === 400) {
          throw new Error(`Invalid request: ${error.message}`);
        }
        
        if (error.status >= 500) {
          const delay = 1000 * attempt;
          console.log(`🔄 Server error, retrying in ${delay}ms (attempt ${attempt}/${retries})`);
          
          if (attempt < retries) {
            await new Promise(resolve => setTimeout(resolve, delay));
            continue;
          }
          throw new Error('Server error - try again later');
        }
        
        // Unexpected error
        throw new Error(`Unexpected error: ${error.message}`);
      }
    }
  }
}

// Usage example
const client = new AdaptiveClient(process.env.ADAPTIVE_API_KEY);

try {
  const response = await client.createCompletion({
    messages: [{ role: "user", content: "Hello!" }],
    model_router: {
      cost_bias: 0.3,
      models: ["openai:gpt-5-mini", "anthropic:claude-sonnet-4-5"]
    }
  });
  
  console.log("Response:", response.choices[0].message.content);
} catch (error) {
  console.error("Failed to get completion:", error.message);
}

FAQ

Why isn't my request routed to the provider I expect?
Check your model_router.models configuration. Ensure the providers you want are included and that your cost_bias setting allows the provider selection you expect.
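For example, include at least one model from each provider you expect to see (model names follow the earlier examples):
model_router: {
  models: ["openai:gpt-5-mini", "anthropic:claude-sonnet-4-5"]
}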
How can I tell which provider and model handled my request?
Check the provider field in the response:
console.log("Selected provider:", completion.provider);
console.log("Model used:", completion.model);
How do I pin requests to specific models?
Use the model_router.models array with specific model names:
model_router: {
  models: [
    "openai:gpt-5-mini"
  ]
}
Why are my costs higher than expected?
Check your cost_bias setting. A higher value (closer to 1) prioritizes quality over cost; set it between 0.0 and 0.3 for maximum cost savings.
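For example, in the request body:
cost_bias: 0.2  // favor cheaper, faster models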
How do I disable semantic caching?
Disable the semantic cache in your request:
semantic_cache: {
  enabled: false
}