Common Issues

  • Authentication Problems
  • Configuration Issues
  • Request/Response Issues
  • Integration-Specific Issues
  • Performance Issues
  • Development Environment Issues

Getting Help

Debug Information to Collect

When reporting issues, please include:
1. Environment Details

# System info
node --version
npm --version

# Package versions
npm list openai
npm list @langchain/openai

2. Request Details

// Sanitized request (remove API key)
{
  "model": "",
  "messages": [...],
  "provider_constraint": [...],
  "cost_bias": 0.5
}

3. Error Information

console.log("Error status:", error.status);
console.log("Error message:", error.message);
console.log("Error stack:", error.stack);
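The environment, request, and error details above can be gathered into a single object that is ready to paste into a bug report. This is a minimal sketch: `buildDebugReport` and `sanitizeRequest` are illustrative names, not part of any Adaptive SDK.

```javascript
// Illustrative helper (not part of the Adaptive SDK): collect the
// debug details listed above into one copy-pasteable report object.
function sanitizeRequest(request) {
  // Never include credentials when sharing a request.
  const { apiKey, ...safe } = request;
  return safe;
}

function buildDebugReport(request, error) {
  return {
    environment: {
      node: process.version,
      platform: process.platform,
    },
    request: sanitizeRequest(request),
    error: {
      status: error.status,
      message: error.message,
      stack: error.stack,
    },
  };
}

// Print the report ready for copy-paste:
// console.log(JSON.stringify(buildDebugReport(req, err), null, 2));
```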

4. Network Diagnostics

# Test connectivity
curl -I https://www.llmadaptive.uk/api/v1/

# DNS resolution
nslookup llmadaptive.uk
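The same check can be run from inside Node (18+, where `fetch` is global) to rule out differences between your shell's network path and your application's. A sketch; `checkConnectivity` is an illustrative name, and the fetch function is injectable so the helper stays testable offline.

```javascript
// Illustrative sketch: HEAD-request the API base URL from Node, so you
// can tell whether a failure is network-level or application-level.
async function checkConnectivity(url, fetchFn = fetch) {
  try {
    const res = await fetchFn(url, { method: "HEAD" });
    return { ok: true, status: res.status };
  } catch (err) {
    return { ok: false, error: err.message };
  }
}

// Usage:
// checkConnectivity("https://www.llmadaptive.uk/api/v1/")
//   .then(result => console.log(result));
```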

Support Channels

Documentation

Check our comprehensive guides and API reference for solutions

GitHub Issues

Report bugs and request features on our GitHub repository

Discord Community

Get help from the community and Adaptive team members

Email Support

Contact support@adaptive.com for priority assistance

Best Practices for Debugging

1. Start with Simple Requests

Test basic functionality first
const simple = await openai.chat.completions.create({
  model: "",
  messages: [{ role: "user", content: "Hello" }]
});

2. Enable Verbose Logging

Add detailed logging to understand what’s happening
console.log("Request:", JSON.stringify(requestData, null, 2));
console.log("Response:", JSON.stringify(response, null, 2));
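Wrapped around the SDK call, that logging looks like this. A sketch only: `loggedCompletion` is an illustrative name, and the client is passed in so the wrapper works with any OpenAI-compatible SDK instance.

```javascript
// Illustrative wrapper: log the exact request and response around a
// completion call. Works with any client exposing the OpenAI-style
// chat.completions.create(params) method.
async function loggedCompletion(client, params) {
  console.log("Request:", JSON.stringify(params, null, 2));
  const response = await client.chat.completions.create(params);
  console.log("Response:", JSON.stringify(response, null, 2));
  return response;
}
```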

3. Test with curl

Verify API access outside your application
curl -X POST https://www.llmadaptive.uk/api/v1/chat/completions \
  -H "X-Stainless-API-Key: $ADAPTIVE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model":"","messages":[{"role":"user","content":"test"}]}'

4. Isolate the Problem

Systematically narrow down the issue:
  • Test different messages
  • Try different parameters
  • Test in different environments
  • Compare with working examples

Complete Error Handling Example

Here’s a production-ready error-handling implementation:
class AdaptiveClient {
  constructor(apiKey) {
    this.openai = new OpenAI({
      apiKey: apiKey,
      baseURL: 'https://www.llmadaptive.uk/api/v1'
    });
  }
  
  async createCompletion(params, retries = 3) {
    for (let attempt = 1; attempt <= retries; attempt++) {
      try {
        const completion = await this.openai.chat.completions.create({
          model: "",
          ...params
        });
        
        // Log success metrics
        console.log(`✅ Success: ${completion.provider} | ${completion.usage.total_tokens} tokens`);
        return completion;
        
      } catch (error) {
        // Handle specific errors
        if (error.status === 401) {
          throw new Error('Invalid API key - check your credentials');
        }
        
        if (error.status === 429) {
          const delay = Math.min(1000 * Math.pow(2, attempt), 10000);
          console.log(`⚠️  Rate limited, retrying in ${delay}ms (attempt ${attempt}/${retries})`);
          
          if (attempt < retries) {
            await new Promise(resolve => setTimeout(resolve, delay));
            continue;
          }
          throw new Error('Rate limit exceeded - reduce request frequency');
        }
        
        if (error.status === 400) {
          throw new Error(`Invalid request: ${error.message}`);
        }
        
        if (error.status >= 500) {
          const delay = 1000 * attempt;
          console.log(`🔄 Server error, retrying in ${delay}ms (attempt ${attempt}/${retries})`);
          
          if (attempt < retries) {
            await new Promise(resolve => setTimeout(resolve, delay));
            continue;
          }
          throw new Error('Server error - try again later');
        }
        
        // Unexpected error
        throw new Error(`Unexpected error: ${error.message}`);
      }
    }
  }
}

// Usage example
const client = new AdaptiveClient(process.env.ADAPTIVE_API_KEY);

try {
  const response = await client.createCompletion({
    messages: [{ role: "user", content: "Hello!" }],
    model_router: {
      cost_bias: 0.3,
      models: [{ provider: "openai" }, { provider: "anthropic" }]
    }
  });
  
  console.log("Response:", response.choices[0].message.content);
} catch (error) {
  console.error("Failed to get completion:", error.message);
}
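If a request hangs instead of failing, a timeout wrapper helps distinguish stalled connections from server errors. A minimal sketch, assuming nothing beyond standard promises; `withTimeout` is an illustrative name, not an SDK feature.

```javascript
// Illustrative sketch: race any promise (e.g. a completion call)
// against a timer so hung connections surface as explicit errors.
async function withTimeout(promise, ms) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error(`Timed out after ${ms}ms`)), ms);
  });
  try {
    return await Promise.race([promise, timeout]);
  } finally {
    clearTimeout(timer); // avoid keeping the event loop alive
  }
}

// Usage:
// const response = await withTimeout(
//   client.createCompletion({ messages: [{ role: "user", content: "Hello!" }] }),
//   15000
// );
```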

FAQ