Overview
The Models API provides access to Adaptive’s comprehensive model registry, which contains detailed information about available LLM models, including pricing, capabilities, context limits, and provider details. Use this API to:

- Discover available models across all providers
- Get detailed pricing and capability information
- Filter models by provider
- Retrieve specific model details for integration
Registry Model System
Adaptive maintains a centralized Model Registry that tracks comprehensive information about LLM models from multiple providers (OpenAI, Anthropic, Google, DeepSeek, Groq, and more).

What is a Registry Model?
A Registry Model is a comprehensive data structure containing:

- Identity: Provider, model name, OpenRouter ID
- Pricing: Input/output token costs, per-request costs
- Capabilities: Context length, supported parameters, tool calling support
- Architecture: Modality, tokenizer, instruction format
- Provider Info: Top provider configuration, available endpoints
- Metadata: Display name, description, timestamps
How the Registry Works
- Centralized Data Source: The registry service maintains up-to-date model information
- Automatic Lookups: When you specify a provider or model, Adaptive queries the registry
- Auto-Fill: Known models automatically get pricing and capability data filled in
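As an illustration, the lookup-and-auto-fill step can be sketched as below. The registry contents, field values, and function names here are hypothetical, for illustration only:

```python
# Hypothetical in-memory slice of the registry, keyed by "provider/model".
# All values are placeholders, not real registry data.
SAMPLE_REGISTRY = {
    "openai/gpt-5-mini": {
        "context_length": 128000,
        "pricing": {"prompt": "0.00000025", "completion": "0.000002"},
    },
}

def auto_fill(request: dict) -> dict:
    """Merge registry data into a request that names a known model."""
    key = f"{request['provider']}/{request['model']}"
    entry = SAMPLE_REGISTRY.get(key)
    if entry is None:
        return request  # unknown model: nothing to fill in
    filled = dict(request)
    filled.setdefault("pricing", entry["pricing"])
    filled.setdefault("context_length", entry["context_length"])
    return filled
```

Unknown models pass through untouched, so callers can still supply their own pricing and limits by hand.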
Endpoints
List All Models
Optional provider filter (e.g., “openai”, “anthropic”, “google”)
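A minimal request sketch. The base URL, the bearer-auth header, the `provider` query-parameter name, and the assumption that the response is a JSON array of RegistryModel objects are all placeholders to confirm against your deployment:

```python
BASE_URL = "https://api.example.com/v1"  # placeholder, not the real endpoint
API_KEY = "YOUR_API_KEY"                 # placeholder

def build_list_request(provider=None):
    """URL and query params for the list-models call."""
    params = {"provider": provider} if provider else {}
    return f"{BASE_URL}/models", params

def list_models(provider=None):
    import requests  # deferred so the sketch loads without the dependency
    url, params = build_list_request(provider)
    resp = requests.get(url, params=params,
                        headers={"Authorization": f"Bearer {API_KEY}"})
    resp.raise_for_status()
    return resp.json()  # assumed: a JSON array of RegistryModel objects
```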
Get Model by Name
Model identifier (e.g., “gpt-5-mini”, “claude-sonnet-4-5”)
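A sketch of fetching a single model, treating 404 as “not in the registry.” Base URL, auth header, and path shape are placeholders:

```python
BASE_URL = "https://api.example.com/v1"  # placeholder, not the real endpoint
API_KEY = "YOUR_API_KEY"                 # placeholder

def model_url(model_id: str) -> str:
    """URL for the get-model-by-name call (path shape is an assumption)."""
    return f"{BASE_URL}/models/{model_id}"

def get_model(model_id: str):
    """Fetch one RegistryModel; return None when the registry has no such model."""
    import requests  # deferred so the sketch loads without the dependency
    resp = requests.get(model_url(model_id),
                        headers={"Authorization": f"Bearer {API_KEY}"})
    if resp.status_code == 404:
        return None
    resp.raise_for_status()
    return resp.json()
```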
Response Schema
RegistryModel Object
| Field | Type | Description |
|---|---|---|
| id | integer | Database ID (internal use) |
| openrouter_id | string | OpenRouter model identifier (primary lookup key) |
| provider | string | Provider name (e.g., “openai”, “anthropic”) |
| model_name | string | Model identifier for API calls |
| display_name | string | Human-readable model name |
| description | string | Model description and use cases |
| context_length | integer | Maximum context window size in tokens |
| pricing | object | Pricing information (see below) |
| architecture | object | Model architecture details (see below) |
| top_provider | object | Top provider configuration (see below) |
| supported_parameters | array | Supported API parameters |
| default_parameters | object | Default parameter values |
| endpoints | array | Available provider endpoints (see below) |
| created_at | string | Creation timestamp (ISO 8601) |
| last_updated | string | Last update timestamp (ISO 8601) |
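For orientation, here is a hypothetical RegistryModel rendered as a Python dict. Every value is a placeholder chosen to match the field types above, not real registry data:

```python
# Placeholder RegistryModel: field names follow the schema above,
# all values are invented for illustration.
sample_model = {
    "id": 42,
    "openrouter_id": "openai/gpt-5-mini",
    "provider": "openai",
    "model_name": "gpt-5-mini",
    "display_name": "GPT-5 Mini",
    "description": "Small, fast general-purpose model.",
    "context_length": 128000,
    "pricing": {"prompt": "0.00000025", "completion": "0.000002"},
    "architecture": {"modality": "text->text", "tokenizer": "o200k_base",
                     "input_modalities": ["text"], "output_modalities": ["text"]},
    "top_provider": {"context_length": 128000,
                     "max_completion_tokens": 16384, "is_moderated": True},
    "supported_parameters": ["temperature", "max_tokens", "tools"],
    "default_parameters": {"temperature": 1.0},
    "endpoints": [],
    "created_at": "2025-01-01T00:00:00Z",
    "last_updated": "2025-06-01T00:00:00Z",
}
```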
Pricing Object
| Field | Type | Description |
|---|---|---|
| prompt | string | Cost per input token (USD, string format) |
| completion | string | Cost per output token (USD, string format) |
| request | string | Cost per request (optional) |
| image | string | Cost per image (optional) |
| web_search | string | Cost for web search (optional) |
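Because prices arrive as strings, decimal arithmetic avoids float rounding when computing a request’s cost. A sketch (the function name and sample prices are illustrative):

```python
from decimal import Decimal

def request_cost(pricing: dict, prompt_tokens: int, completion_tokens: int) -> Decimal:
    """Total USD cost for one request under the given per-token pricing."""
    cost = (Decimal(pricing["prompt"]) * prompt_tokens
            + Decimal(pricing["completion"]) * completion_tokens)
    cost += Decimal(pricing.get("request", "0"))  # optional per-request fee
    return cost
```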
Architecture Object
| Field | Type | Description |
|---|---|---|
| modality | string | Input/output modality (e.g., “text->text”, “text+image->text”) |
| input_modalities | array | Supported input types |
| output_modalities | array | Supported output types |
| tokenizer | string | Tokenizer used (e.g., “cl100k_base”, “o200k_base”) |
| instruct_type | string | Instruction format (e.g., “chatml”, null) |
TopProvider Object
| Field | Type | Description |
|---|---|---|
| context_length | integer | Provider’s context limit |
| max_completion_tokens | integer | Maximum output tokens |
| is_moderated | boolean | Whether content is moderated |
Endpoint Object
| Field | Type | Description |
|---|---|---|
| name | string | Full endpoint name |
| model_name | string | Display model name |
| context_length | integer | Context length for this endpoint |
| pricing | object | Endpoint-specific pricing |
| provider_name | string | Provider name |
| tag | string | Provider tag/slug |
| max_completion_tokens | integer | Max completion tokens |
| supported_parameters | array | Supported parameters |
| status | integer | Status code (0 = active) |
| supports_implicit_caching | boolean | Implicit caching support |
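Since status 0 marks an endpoint as active, picking a usable endpoint is a small filter. A sketch over hypothetical endpoint data:

```python
def best_active_endpoint(endpoints):
    """Active endpoint (status == 0) with the largest context window, if any."""
    active = [ep for ep in endpoints if ep.get("status") == 0]
    return max(active, key=lambda ep: ep["context_length"], default=None)
```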
Common Use Cases
1. Discover Available Models
Query all models to see what’s available.

2. Compare Pricing Across Providers
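Given a fetched list of registry models, ranking them by prompt-token price is a short sort. The sample models below are hypothetical; prices use the string format the registry returns:

```python
from decimal import Decimal

def by_prompt_price(models):
    """Models sorted from cheapest to most expensive prompt pricing."""
    return sorted(models, key=lambda m: Decimal(m["pricing"]["prompt"]))
```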
Find the cheapest model for your use case.

3. Check Tool Calling Support
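A sketch of filtering for tool calling, assuming the registry advertises it via a “tools” entry in supported_parameters (confirm the exact parameter name against real responses):

```python
def supports_tools(model: dict) -> bool:
    """True when tool calling appears among the model's supported parameters."""
    return "tools" in model.get("supported_parameters", [])

def tool_capable(models):
    return [m for m in models if supports_tools(m)]
```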
Find models that support function calling.

4. Get Provider-Specific Models
List all models from a specific provider.

5. Validate Model Before Using
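Validation can be done against a fetched model list before sending traffic. A sketch with hypothetical registry contents:

```python
def find_model(models, name):
    """Return the registry entry for `name`, or None if it is unknown."""
    return next((m for m in models if m["model_name"] == name), None)

def usable(models, name, needed_context):
    """True when the model exists and its context window is large enough."""
    entry = find_model(models, name)
    return entry is not None and entry["context_length"] >= needed_context
```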
Check if a model exists and get its capabilities.

Integration with Other APIs
Use with Chat Completions
Combine with the Chat Completions API for intelligent routing.

Use with Select Model API
Combine with the Select Model API for explicit selection.

Best Practices
1. Cache Registry Data
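A minimal time-based cache keeps repeated reads off the API. `fetch` below is a stand-in for whatever function performs the real HTTP call:

```python
import time

class RegistryCache:
    """Caches a fetched model list for ttl_seconds before refetching."""

    def __init__(self, fetch, ttl_seconds: float = 3600.0):
        self._fetch = fetch          # callable returning the model list
        self._ttl = ttl_seconds
        self._data = None
        self._fetched_at = 0.0

    def models(self):
        stale = time.monotonic() - self._fetched_at > self._ttl
        if self._data is None or stale:
            self._data = self._fetch()
            self._fetched_at = time.monotonic()
        return self._data
```

An hour is a reasonable default since registry data changes slowly; tune the TTL to how quickly you need pricing updates.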
Cache model information to reduce API calls.

2. Handle Registry Failures Gracefully
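When the registry is unreachable (e.g., a 502), a static known-good list keeps your application running. The fallback contents below are hypothetical:

```python
FALLBACK_MODELS = [  # placeholder list; keep in sync with your deployment
    {"provider": "openai", "model_name": "gpt-5-mini"},
]

def models_with_fallback(fetch):
    """Use the live registry when possible, FALLBACK_MODELS otherwise."""
    try:
        return fetch()
    except Exception:
        return FALLBACK_MODELS
```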
Always have fallback options.

3. Filter by Capabilities
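A sketch of narrowing the registry to models meeting your requirements, here a minimum context window and optional image input. Field names follow the schema above; the thresholds are examples:

```python
def capable(models, min_context=100_000, need_image_input=False):
    """Models with at least min_context tokens and, optionally, image input."""
    out = []
    for m in models:
        if m["context_length"] < min_context:
            continue
        modalities = m["architecture"].get("input_modalities", [])
        if need_image_input and "image" not in modalities:
            continue
        out.append(m)
    return out
```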
Select models based on your requirements.

Error Handling
| Status Code | Description | Solution |
|---|---|---|
| 200 | Success | Process returned models |
| 400 | Bad Request | Check model ID parameter |
| 404 | Not Found | Model doesn’t exist in registry; verify the model ID |
| 502 | Bad Gateway | Registry service unavailable, use fallback |
Related Documentation
- Model Specification Reference - Complete field documentation
- Chat Completions API - Use models in chat completions
- Select Model API - Intelligent model selection
- Intelligent Routing - How routing uses registry data