GET /v1/models

Overview

The Models API provides access to Adaptive’s comprehensive model registry, which contains detailed information about available LLM models including pricing, capabilities, context limits, and provider details. Use this API to:
  • Discover available models across all providers
  • Get detailed pricing and capability information
  • Filter models by provider
  • Retrieve specific model details for integration

Registry Model System

Adaptive maintains a centralized Model Registry that tracks comprehensive information about LLM models from multiple providers (OpenAI, Anthropic, Google, DeepSeek, Groq, and more).

What is a Registry Model?

A Registry Model is a comprehensive data structure containing:
  • Identity: Provider, model name, OpenRouter ID
  • Pricing: Input/output token costs, per-request costs
  • Capabilities: Context length, supported parameters, tool calling support
  • Architecture: Modality, tokenizer, instruction format
  • Provider Info: Top provider configuration, available endpoints
  • Metadata: Display name, description, timestamps

How the Registry Works

  1. Centralized Data Source: The registry service maintains up-to-date model information
  2. Automatic Lookups: When you specify a provider or model, Adaptive queries the registry
  3. Auto-Fill: Known models automatically get pricing and capability data filled in
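The auto-fill step can be sketched as follows. This is an illustrative sketch only: `REGISTRY` is a stand-in dictionary with sample data, not the live registry service.

```python
# Stand-in for the registry service (sample data, not live registry contents)
REGISTRY = {
    "openai:gpt-5-mini": {
        "pricing": {"prompt": "0.00015", "completion": "0.0006"},
        "context_length": 128000,
        "supported_parameters": ["temperature", "top_p", "tools"],
    },
}

def auto_fill(provider, model_name):
    """Merge known registry data into a minimal model spec."""
    spec = {"provider": provider, "model_name": model_name}
    entry = REGISTRY.get(f"{provider}:{model_name}")
    if entry:
        spec.update(entry)  # known models get pricing/capabilities filled in
    return spec

print(auto_fill("openai", "gpt-5-mini")["context_length"])  # 128000
```

Unknown models pass through with only the identity fields you supplied, which is why known models get accurate cost tracking automatically.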

Endpoints

List All Models

Query parameter:
  • provider (string, optional): Provider filter (e.g., “openai”, “anthropic”, “google”)
curl https://api.llmadaptive.uk/v1/models \
  -H "Authorization: Bearer $ADAPTIVE_API_KEY"
[
  {
    "id": 1,
    "openrouter_id": "openai/gpt-5-mini",
    "provider": "openai",
    "model_name": "gpt-5-mini",
    "display_name": "GPT-5 Mini",
    "description": "Affordable and intelligent small model for fast, lightweight tasks",
    "context_length": 128000,
    "pricing": {
      "prompt": "0.00015",
      "completion": "0.0006",
      "request": "0",
      "image": "0"
    },
    "architecture": {
      "modality": "text+image->text",
      "input_modalities": ["text", "image"],
      "output_modalities": ["text"],
      "tokenizer": "o200k_base",
      "instruct_type": null
    },
    "top_provider": {
      "context_length": 128000,
      "max_completion_tokens": 16384,
      "is_moderated": true
    },
    "supported_parameters": [
      "temperature",
      "top_p",
      "max_tokens",
      "tools",
      "response_format",
      "seed"
    ],
    "default_parameters": {
      "temperature": 1.0
    },
    "endpoints": [
      {
        "name": "openai/gpt-5-mini",
        "model_name": "GPT-5 Mini",
        "context_length": 128000,
        "pricing": {
          "prompt": "0.00015",
          "completion": "0.0006",
          "request": "0",
          "image": "0.2890"
        },
        "provider_name": "OpenAI",
        "tag": "openai",
        "max_completion_tokens": 16384,
        "supported_parameters": [
          "temperature",
          "top_p",
          "tools",
          "response_format"
        ],
        "status": 0,
        "supports_implicit_caching": false
      }
    ],
    "created_at": "2025-01-15T10:30:00Z",
    "last_updated": "2025-01-20T14:45:00Z"
  }
]

Get Model by Name

Path parameter:
  • id (string, required): Model identifier (e.g., “gpt-5-mini”, “claude-sonnet-4-5”)
curl https://api.llmadaptive.uk/v1/models/gpt-5-mini \
  -H "Authorization: Bearer $ADAPTIVE_API_KEY"
{
  "id": 1,
  "openrouter_id": "openai/gpt-5-mini",
  "provider": "openai",
  "model_name": "gpt-5-mini",
  "display_name": "GPT-5 Mini",
  "description": "Affordable and intelligent small model for fast, lightweight tasks",
  "context_length": 128000,
  "pricing": {
    "prompt": "0.00015",
    "completion": "0.0006",
    "request": "0",
    "image": "0"
  },
  "architecture": {
    "modality": "text+image->text",
    "input_modalities": ["text", "image"],
    "output_modalities": ["text"],
    "tokenizer": "o200k_base",
    "instruct_type": null
  },
  "top_provider": {
    "context_length": 128000,
    "max_completion_tokens": 16384,
    "is_moderated": true
  },
  "supported_parameters": [
    "temperature",
    "top_p",
    "max_tokens",
    "tools",
    "response_format",
    "seed"
  ],
  "default_parameters": {
    "temperature": 1.0
  },
  "endpoints": [
    {
      "name": "openai/gpt-5-mini",
      "model_name": "GPT-5 Mini",
      "context_length": 128000,
      "pricing": {
        "prompt": "0.00015",
        "completion": "0.0006",
        "request": "0",
        "image": "0.2890"
      },
      "provider_name": "OpenAI",
      "tag": "openai",
      "max_completion_tokens": 16384,
      "supported_parameters": [
        "temperature",
        "top_p",
        "tools",
        "response_format"
      ],
      "status": 0,
      "supports_implicit_caching": false
    }
  ],
  "created_at": "2025-01-15T10:30:00Z",
  "last_updated": "2025-01-20T14:45:00Z"
}

Response Schema

RegistryModel Object

  • id (integer): Database ID (internal use)
  • openrouter_id (string): OpenRouter model identifier (primary lookup key)
  • provider (string): Provider name (e.g., “openai”, “anthropic”)
  • model_name (string): Model identifier for API calls
  • display_name (string): Human-readable model name
  • description (string): Model description and use cases
  • context_length (integer): Maximum context window size in tokens
  • pricing (object): Pricing information (see below)
  • architecture (object): Model architecture details (see below)
  • top_provider (object): Top provider configuration (see below)
  • supported_parameters (array): Supported API parameters
  • default_parameters (object): Default parameter values
  • endpoints (array): Available provider endpoints (see below)
  • created_at (string): Creation timestamp (ISO 8601)
  • last_updated (string): Last update timestamp (ISO 8601)

Pricing Object

  • prompt (string): Cost per input token (USD, string format)
  • completion (string): Cost per output token (USD, string format)
  • request (string): Cost per request (optional)
  • image (string): Cost per image (optional)
  • web_search (string): Cost per web search (optional)
Note: Pricing is in string format to preserve precision. Multiply by 1M for cost per million tokens.
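For example, the conversion can be done with `Decimal` to preserve the precision the string format carries (a small helper; the function name is my own):

```python
from decimal import Decimal

def per_million(price: str) -> Decimal:
    """Convert a per-token price string to cost per 1M tokens without float error."""
    return Decimal(price) * 1_000_000

pricing = {"prompt": "0.00015", "completion": "0.0006"}
print(f"input: ${per_million(pricing['prompt'])} per 1M tokens")   # input: $150.00000 per 1M tokens
print(f"output: ${per_million(pricing['completion'])} per 1M tokens")
```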

Architecture Object

  • modality (string): Input/output modality (e.g., “text->text”, “text+image->text”)
  • input_modalities (array): Supported input types
  • output_modalities (array): Supported output types
  • tokenizer (string): Tokenizer used (e.g., “cl100k_base”, “o200k_base”)
  • instruct_type (string or null): Instruction format (e.g., “chatml”, null)
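The modality fields make it easy to find, say, models that accept image input. A minimal sketch using sample model dicts (the helper name is my own, not part of the API):

```python
# Hypothetical helper built on the architecture.input_modalities field (sample data)
def supports_image_input(model):
    return "image" in model.get("architecture", {}).get("input_modalities", [])

models = [
    {"model_name": "gpt-5-mini",
     "architecture": {"modality": "text+image->text",
                      "input_modalities": ["text", "image"]}},
    {"model_name": "text-only-model",
     "architecture": {"modality": "text->text",
                      "input_modalities": ["text"]}},
]

vision = [m["model_name"] for m in models if supports_image_input(m)]
print(vision)  # ['gpt-5-mini']
```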

TopProvider Object

  • context_length (integer): Provider’s context limit
  • max_completion_tokens (integer): Maximum output tokens
  • is_moderated (boolean): Whether content is moderated

Endpoint Object

  • name (string): Full endpoint name
  • model_name (string): Display model name
  • context_length (integer): Context length for this endpoint
  • pricing (object): Endpoint-specific pricing
  • provider_name (string): Provider name
  • tag (string): Provider tag/slug
  • max_completion_tokens (integer): Maximum completion tokens
  • supported_parameters (array): Supported parameters
  • status (integer): Status code (0 = active)
  • supports_implicit_caching (boolean): Implicit caching support
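Since status 0 marks an active endpoint, filtering to usable endpoints is a one-liner. An illustrative sketch with sample data (the helper name is my own):

```python
# Keep only active endpoints (status == 0), per the Endpoint Object fields above
def active_endpoints(model):
    return [e for e in model.get("endpoints", []) if e.get("status") == 0]

model = {"endpoints": [
    {"name": "openai/gpt-5-mini", "status": 0, "supports_implicit_caching": False},
    {"name": "example/backup-route", "status": 1, "supports_implicit_caching": True},
]}

for ep in active_endpoints(model):
    print(ep["name"])  # openai/gpt-5-mini
```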

Common Use Cases

1. Discover Available Models

Query all models to see what’s available:
import requests

response = requests.get(
    "https://api.llmadaptive.uk/v1/models",
    headers={"Authorization": f"Bearer {api_key}"}
)

models = response.json()
for model in models:
    print(f"{model['provider']}/{model['model_name']}: {model['display_name']}")

2. Compare Pricing Across Providers

Find the cheapest model for your use case:
import requests

response = requests.get(
    "https://api.llmadaptive.uk/v1/models",
    headers={"Authorization": f"Bearer {api_key}"}
)

models = response.json()

# Calculate average cost per token
for model in models:
    input_cost = float(model['pricing']['prompt'])
    output_cost = float(model['pricing']['completion'])
    avg_cost = (input_cost + output_cost) / 2

    print(f"{model['display_name']}: ${avg_cost * 1_000_000:.2f} per 1M tokens")

3. Check Tool Calling Support

Find models that support function calling:
import requests

response = requests.get(
    "https://api.llmadaptive.uk/v1/models",
    headers={"Authorization": f"Bearer {api_key}"}
)

models = response.json()

tool_calling_models = [
    model for model in models
    if "tools" in model.get('supported_parameters', []) or
       "functions" in model.get('supported_parameters', [])
]

print(f"Found {len(tool_calling_models)} models with tool calling support")

4. Get Provider-Specific Models

List all models from a specific provider:
import requests

response = requests.get(
    "https://api.llmadaptive.uk/v1/models?provider=anthropic",
    headers={"Authorization": f"Bearer {api_key}"}
)

anthropic_models = response.json()
for model in anthropic_models:
    print(f"{model['model_name']}: {model['context_length']} token context")

5. Validate Model Before Using

Check if a model exists and get its capabilities:
import requests

model_name = "gpt-5-mini"

response = requests.get(
    f"https://api.llmadaptive.uk/v1/models/{model_name}",
    headers={"Authorization": f"Bearer {api_key}"}
)

if response.status_code == 200:
    model = response.json()
    print(f"✅ Model exists: {model['display_name']}")
    print(f"Context: {model['context_length']} tokens")
    print(f"Supports tools: {'tools' in model['supported_parameters']}")
else:
    print(f"❌ Model not found: {model_name}")

Integration with Other APIs

Use with Chat Completions

Combine with the Chat Completions API for intelligent routing:
import requests

# 1. Query registry for available models
models_response = requests.get(
    "https://api.llmadaptive.uk/v1/models?provider=openai",
    headers={"Authorization": f"Bearer {api_key}"}
)
available_models = models_response.json()

# 2. Use models in chat completion with intelligent routing
chat_response = requests.post(
    "https://api.llmadaptive.uk/v1/chat/completions",
    headers={"Authorization": f"Bearer {api_key}"},
    json={
        "model": "",  # Empty for intelligent routing
        "messages": [{"role": "user", "content": "Hello"}],
        "model_router": {
            "models": [
                f"{m['provider']}:{m['model_name']}"
                for m in available_models[:3]  # Use top 3 models
            ]
        }
    }
)

Use with Select Model API

Combine with the Select Model API for explicit selection:
import requests

# 1. Get models from registry
models_response = requests.get(
    "https://api.llmadaptive.uk/v1/models",
    headers={"Authorization": f"Bearer {api_key}"}
)
models = models_response.json()

# 2. Use select-model to choose best model for prompt
selection_response = requests.post(
    "https://api.llmadaptive.uk/v1/select-model",
    headers={"Authorization": f"Bearer {api_key}"},
    json={
        "prompt": "Write a Python function to process CSV files",
        "models": [
            f"{m['provider']}:{m['model_name']}"
            for m in models
            if "tools" in m.get("supported_parameters", [])
        ]
    }
)

Best Practices

1. Cache Registry Data

Cache model information to reduce API calls:
import requests
from datetime import datetime, timedelta

class ModelRegistry:
    def __init__(self, api_key):
        self.api_key = api_key
        self.cache = {}
        self.cache_expiry = None
        self.cache_duration = timedelta(hours=1)

    def get_models(self, provider=None):
        # Refresh the cache when it is empty or expired
        if not self.cache or self.cache_expiry is None or datetime.now() >= self.cache_expiry:
            response = requests.get(
                "https://api.llmadaptive.uk/v1/models",
                headers={"Authorization": f"Bearer {self.api_key}"}
            )
            response.raise_for_status()
            self.cache = {'models': response.json()}
            self.cache_expiry = datetime.now() + self.cache_duration

        # Always cache the full list and filter locally, so a provider-filtered
        # call never poisons the cache for other providers
        models = self.cache.get('models', [])
        if provider:
            return [m for m in models if m['provider'] == provider]
        return models

2. Handle Registry Failures Gracefully

Always have fallback options:
import requests

def get_models_with_fallback(api_key, provider=None):
    try:
        url = "https://api.llmadaptive.uk/v1/models"
        if provider:
            url += f"?provider={provider}"

        response = requests.get(
            url,
            headers={"Authorization": f"Bearer {api_key}"},
            timeout=5
        )
        response.raise_for_status()
        return response.json()

    except requests.RequestException as e:
        print(f"Registry query failed: {e}")
        # Return fallback models
        return [
            {"provider": "openai", "model_name": "gpt-5-mini"},
            {"provider": "anthropic", "model_name": "claude-sonnet-4-5"}
        ]

3. Filter by Capabilities

Select models based on your requirements:
import requests

def find_models_by_criteria(api_key, min_context=100000, supports_tools=True):
    response = requests.get(
        "https://api.llmadaptive.uk/v1/models",
        headers={"Authorization": f"Bearer {api_key}"}
    )

    models = response.json()

    filtered = []
    for model in models:
        # Check context length
        if model['context_length'] < min_context:
            continue

        # Check tool support
        if supports_tools and 'tools' not in model.get('supported_parameters', []):
            continue

        filtered.append(model)

    return filtered

Error Handling

  • 200 Success: Process the returned models
  • 400 Bad Request: Check the model ID parameter
  • 404 Not Found: Model doesn’t exist in the registry
  • 502 Bad Gateway: Registry service unavailable; use fallback models
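One way to act on this table in client code (a small dispatcher; the function name and return strings are my own, not part of the API):

```python
def models_api_action(status_code):
    """Map a Models API status code to a suggested action, per the table above."""
    actions = {
        200: "process returned models",
        400: "check model ID parameter",
        404: "model not in registry",
        502: "registry unavailable, use fallback models",
    }
    return actions.get(status_code, "unexpected status, inspect response body")

print(models_api_action(502))  # registry unavailable, use fallback models
```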