GET /v1/models

Overview

Query Adaptive’s model registry to discover available LLM models, their pricing, capabilities, and provider information. Use this API to find models that match your requirements and get up-to-date pricing details. Common use cases:
  • List all available models
  • Filter models by provider, features, or price
  • Get pricing information for cost estimation
  • Find models with specific capabilities (e.g., vision, tool calling)

Quick Start

Get all available models:
curl https://api.llmadaptive.uk/v1/models \
  -H "Authorization: Bearer apk_123456"
Filter by provider (e.g., OpenAI):
curl "https://api.llmadaptive.uk/v1/models?provider=openai" \
  -H "Authorization: Bearer apk_123456"
Get a specific model:
curl https://api.llmadaptive.uk/v1/models/gpt-4 \
  -H "Authorization: Bearer apk_123456"
Responses include key fields like model_name, display_name, context_length, pricing, and available providers.
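The key fields above can be pulled out of a response with a few lines of Python. A minimal sketch, using a sample list that mirrors the example response in this doc (real responses contain more models and more fields); `summarize` is a hypothetical helper name, not part of the API:

```python
def summarize(models):
    """Return (model_name, context_length) pairs from a models list."""
    return [(m["model_name"], m["context_length"]) for m in models]

# Sample data shaped like the example response shown in this doc.
sample = [
    {
        "model_name": "gpt-4",
        "display_name": "GPT-4",
        "context_length": 128000,
        "pricing": {"prompt_cost": "0.00003", "completion_cost": "0.00006"},
    }
]

print(summarize(sample))  # [('gpt-4', 128000)]
```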

Endpoints

List All Models

Common Query Parameters:
provider
string
Filter by provider (e.g., “openai”, “anthropic”, “google”). Repeat parameter for multiple providers.
min_context_length
integer
Filter by minimum context length (e.g., 128000 for 128K tokens)
supported_param
string[]
Filter by required features (repeatable). Example: ?supported_param=tools for tool calling support
Query Parameter Syntax: For multiple values, repeat the parameter name:
  • ✅ Correct: ?author=openai&author=anthropic
  • ❌ Incorrect: ?author=openai,anthropic (comma-separated not supported)
curl https://api.llmadaptive.uk/v1/models \
  -H "Authorization: Bearer apk_123456"
[
  {
    "model_name": "gpt-4",
    "display_name": "GPT-4",
    "context_length": 128000,
    "pricing": {
      "prompt_cost": "0.00003",
      "completion_cost": "0.00006"
    },
    "supported_parameters": [
      {"parameter_name": "temperature"},
      {"parameter_name": "tools"}
    ],
    "providers": [
      {
        "name": "openai",
        "status": 0,
        "context_length": 128000
      }
    ]
  }
  // ... more models
]

Get Model by Name

id
string
required
Model identifier (e.g., “gpt-5-mini”, “claude-sonnet-4-5”)
curl https://api.llmadaptive.uk/v1/models/gpt-4 \
  -H "Authorization: Bearer apk_123456"
{
  "model_name": "gpt-4",
  "display_name": "GPT-4",
  "context_length": 128000,
  "pricing": {
    "prompt_cost": "0.00003",
    "completion_cost": "0.00006"
  },
  "supported_parameters": [
    {"parameter_name": "temperature"},
    {"parameter_name": "tools"}
  ],
  "providers": [
    {
      "name": "openai",
      "status": 0,
      "context_length": 128000
    }
  ]
  // ... additional fields omitted for brevity
}

Response Fields

Key fields in the response:
| Field | Type | Description |
| --- | --- | --- |
| `model_name` | string | Model identifier (e.g., "gpt-4", "claude-sonnet-4-5") |
| `display_name` | string | Human-readable model name |
| `context_length` | integer | Maximum context window in tokens |
| `pricing.prompt_cost` | string | Cost per input token (as string for precision) |
| `pricing.completion_cost` | string | Cost per output token (as string for precision) |
| `supported_parameters` | array | Supported API parameters (e.g., temperature, tools) |
| `providers` | array | Available provider endpoints |
| `providers[].name` | string | Provider name (e.g., "openai", "anthropic") |
| `providers[].status` | integer | Provider status (0 = active) |
Field Type Notes:
  • Pricing values are strings (not numbers) to preserve decimal precision
  • Boolean-like fields use strings ("true" or "false") for compatibility
  • status: 0 means active, non-zero means inactive
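Because pricing values are strings, converting them with `Decimal` rather than `float` preserves their full precision. A sketch of a cost estimate built on that convention; `estimate_cost` is a hypothetical helper, not part of the API:

```python
from decimal import Decimal

def estimate_cost(pricing, input_tokens, output_tokens):
    """Per-request cost from the string pricing fields, using Decimal
    so the string values keep their full decimal precision."""
    return (Decimal(pricing["prompt_cost"]) * input_tokens
            + Decimal(pricing["completion_cost"]) * output_tokens)

# Pricing values as returned by the API (strings, not numbers).
pricing = {"prompt_cost": "0.00003", "completion_cost": "0.00006"}
print(estimate_cost(pricing, 1000, 500))  # 0.06000
```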

Full Model Object

The complete response includes additional fields:
  • id - Database ID (internal)
  • author - Model author/organization
  • description - Model description
  • architecture - Architecture details (modality, tokenizer, etc.)
  • top_provider - Top provider configuration
  • default_parameters - Default parameter values

Pricing Object

  • request_cost - Per-request cost
  • image_cost - Per-image cost
  • web_search_cost - Web search cost
  • internal_reasoning_cost - Internal reasoning cost

Provider Object

Each provider in the providers array includes:
  • endpoint_model_name - Model name at endpoint
  • context_length - Provider-specific context limit
  • max_completion_tokens - Maximum output tokens
  • quantization - Model quantization
  • uptime_last_30m - Recent uptime percentage
  • supports_implicit_caching - Caching support
  • pricing - Provider-specific pricing details
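A sketch of filtering the `providers` array using the status convention documented above (0 = active); the sample data follows the fields shown in this doc, and the inactive "azure" entry is purely hypothetical:

```python
def active_providers(model):
    """Names of providers whose status is 0 (active)."""
    return [p["name"] for p in model.get("providers", [])
            if p.get("status") == 0]

model = {
    "model_name": "gpt-4",
    "providers": [
        {"name": "openai", "status": 0, "context_length": 128000},
        # Hypothetical inactive entry for illustration only.
        {"name": "azure", "status": 1, "context_length": 128000},
    ],
}

print(active_providers(model))  # ['openai']
```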

Common Use Cases

Find Models with Specific Features

# Find models with tool calling support
curl "https://api.llmadaptive.uk/v1/models?supported_param=tools" \
  -H "Authorization: Bearer apk_123456"

Find Models with Large Context Windows

# Find models with at least 128K context
curl "https://api.llmadaptive.uk/v1/models?min_context_length=128000" \
  -H "Authorization: Bearer apk_123456"

Get Pricing for Cost Estimation

import requests

headers = {"Authorization": f"Bearer {api_key}"}

# Get model details including pricing
response = requests.get(
    "https://api.llmadaptive.uk/v1/models/gpt-4",
    headers=headers
)
model = response.json()

# Calculate cost for 1000 input tokens and 500 output tokens
input_cost = float(model['pricing']['prompt_cost']) * 1000
output_cost = float(model['pricing']['completion_cost']) * 500
total_cost = input_cost + output_cost

print(f"Estimated cost: ${total_cost:.4f}")

Compare Models Across Providers

import requests

headers = {"Authorization": f"Bearer {api_key}"}

# Get GPT-4 and Claude models
response = requests.get(
    "https://api.llmadaptive.uk/v1/models",
    headers=headers,
    params={"provider": ["openai", "anthropic"]}
)
models = response.json()

# Compare pricing
for model in models:
    print(f"{model['display_name']}: "
          f"${model['pricing']['prompt_cost']}/token input, "
          f"${model['pricing']['completion_cost']}/token output")

Advanced Filtering

The Models API supports advanced filtering with these additional parameters:
Filter by Author:
?author=openai&author=anthropic
Filter by Model Name:
?model_name=gpt-4&model_name=claude-3
Filter by Input/Output Modality:
?input_modality=text&input_modality=image
?output_modality=text
Filter by Maximum Cost:
?max_prompt_cost=0.00001
Filter by Status:
?status=0  # 0 = active endpoints only
Filter by Quantization:
?quantization=fp16
Combine Multiple Filters:
# Find vision-capable OpenAI models with large context
curl "https://api.llmadaptive.uk/v1/models?author=openai&supported_param=vision&min_context_length=100000" \
  -H "Authorization: Bearer apk_123456"
Filter Logic:
  • Multiple values for the same parameter use OR logic (e.g., ?author=openai&author=anthropic returns models from OpenAI OR Anthropic)
  • Different parameters use AND logic (e.g., ?author=openai&min_context_length=128000 returns OpenAI models AND with 128K+ context)
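The OR/AND behavior falls directly out of how the query string is built: list values repeat the parameter name (OR within a parameter), and distinct keys are simply appended (AND across parameters). A sketch using the standard library; `build_query` is a hypothetical helper, not part of the API:

```python
from urllib.parse import urlencode

def build_query(**filters):
    """Build a query string: list values repeat the parameter name
    (OR logic), distinct parameters combine with AND logic."""
    pairs = []
    for key, value in filters.items():
        if isinstance(value, (list, tuple)):
            pairs.extend((key, v) for v in value)
        else:
            pairs.append((key, value))
    return urlencode(pairs)

print(build_query(author=["openai", "anthropic"], min_context_length=128000))
# author=openai&author=anthropic&min_context_length=128000
```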