Query the model registry to discover available LLM models and their capabilities
Each model entry includes `model_name`, `display_name`, `context_length`, `pricing`, and the available providers.

## Query Parameters

- `?min_context_length=128000`: minimum context window in tokens (e.g., `128000` for 128K tokens)
- `?supported_param=tools`: only models with tool calling support
- `?author=openai&author=anthropic`: repeat the parameter to filter by multiple authors; the comma-separated form `?author=openai,anthropic` is not supported

## Response Fields

| Field | Type | Description |
|---|---|---|
| `model_name` | string | Model identifier (e.g., `"gpt-4"`, `"claude-sonnet-4-5"`) |
| `display_name` | string | Human-readable model name |
| `context_length` | integer | Maximum context window in tokens |
| `pricing.prompt_cost` | string | Cost per input token (as string for precision) |
| `pricing.completion_cost` | string | Cost per output token (as string for precision) |
| `supported_parameters` | array | Supported API parameters (e.g., temperature, tools) |
| `providers` | array | Available provider endpoints |
| `providers[].name` | string | Provider name (e.g., `"openai"`, `"anthropic"`) |
| `providers[].status` | integer | Provider status (0 = active) |
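A minimal sketch of consuming these fields, assuming a hypothetical registry base URL (`REGISTRY_URL` below is a placeholder, not the real endpoint). It shows the two conventions called out above: costs arrive as strings and should be parsed with `Decimal` to preserve precision, and a provider is active only when `status` is `0`.

```python
from decimal import Decimal
import json
import urllib.request

# Hypothetical base URL; substitute your registry's actual endpoint.
REGISTRY_URL = "https://api.example.com/v1/models"

def list_models(url=REGISTRY_URL):
    """Fetch the full model list from the registry."""
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read())

def summarize(model):
    """Extract the core fields from one model entry."""
    # Costs are strings; Decimal avoids float rounding on tiny per-token prices.
    prompt_cost = Decimal(model["pricing"]["prompt_cost"])
    # status == 0 means the provider endpoint is active.
    active = [p["name"] for p in model["providers"] if p["status"] == 0]
    return {
        "name": model["model_name"],
        "context": model["context_length"],
        "prompt_cost": prompt_cost,
        "active_providers": active,
    }
```

`summarize` deliberately drops inactive providers, so downstream routing code only ever sees endpoints it can actually call.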
"true" or "false") for compatibilitystatus: 0 means active, non-zero means inactiveComplete Response Schema
- `id`: Database ID (internal)
- `author`: Model author/organization
- `description`: Model description
- `architecture`: Architecture details (modality, tokenizer, etc.)
- `top_provider`: Top provider configuration
- `default_parameters`: Default parameter values
- `request_cost`: Per-request cost
- `image_cost`: Per-image cost
- `web_search_cost`: Web search cost
- `internal_reasoning_cost`: Internal reasoning cost

Each entry in the `providers` array also includes:

- `endpoint_model_name`: Model name at the endpoint
- `context_length`: Provider-specific context limit
- `max_completion_tokens`: Maximum output tokens
- `quantization`: Model quantization
- `uptime_last_30m`: Recent uptime percentage
- `supports_implicit_caching`: Caching support
- `pricing`: Provider-specific pricing details

## Additional Filter Parameters
- Repeating a parameter combines its values with OR: `?author=openai&author=anthropic` returns models from OpenAI or Anthropic.
- Different parameters combine with AND: `?author=openai&min_context_length=128000` returns OpenAI models with a 128K+ context window.
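The filter semantics above can be sketched with the standard library's `urlencode`: passing a list for a key together with `doseq=True` produces the repeated-parameter form the API expects. The `build_query` helper is illustrative, not part of any client library.

```python
from urllib.parse import urlencode

def build_query(**filters):
    """Build a registry query string from filter values.

    Pass a list to repeat a parameter (OR semantics across its values);
    distinct parameters combine with AND on the server side.
    """
    # doseq=True expands list values into repeated key=value pairs,
    # e.g. author=openai&author=anthropic.
    return urlencode(filters, doseq=True)

# OR across authors, AND with a minimum context length:
qs = build_query(author=["openai", "anthropic"], min_context_length=128000)
# -> "author=openai&author=anthropic&min_context_length=128000"
```

Building the string this way avoids hand-concatenating parameters and guarantees values are percent-encoded correctly.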