
Get Your Adaptive API Key

Sign up here to create an account and generate your API key.

Quick Setup

Installation

pip install llama-index-llms-openai llama-index-embeddings-openai

Basic Integration

import os
from llama_index.llms.openai import OpenAI
from llama_index.core import Settings

# Map your Adaptive API key to the env var LlamaIndex's OpenAI integrations read by default
os.environ["OPENAI_API_KEY"] = os.environ["ADAPTIVE_API_KEY"]

# Initialize OpenAI LLM with Adaptive endpoint
llm = OpenAI(
    model="",  # Empty string enables intelligent routing
    api_base="https://api.llmadaptive.uk/v1",
    api_key=os.environ["ADAPTIVE_API_KEY"],
)

# Set as global LLM
Settings.llm = llm

# Use with simple queries
response = llm.complete("What is retrieval-augmented generation?")
print(response)
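
The same llm object also works through LlamaIndex's standard chat and streaming interfaces, with routing handled by the Adaptive endpoint configured above. A minimal sketch (the prompts are illustrative):

from llama_index.core.llms import ChatMessage

# Multi-turn chat through the same routed endpoint
messages = [
    ChatMessage(role="system", content="You are a concise technical assistant."),
    ChatMessage(role="user", content="Explain vector embeddings in one sentence."),
]
chat_response = llm.chat(messages)
print(chat_response.message.content)

# Streaming completion: print tokens as they arrive
for chunk in llm.stream_complete("List two use cases for RAG."):
    print(chunk.delta, end="", flush=True)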

RAG Example

from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

# Load documents from a local ./data directory
documents = SimpleDirectoryReader("data").load_data()

# Create index
index = VectorStoreIndex.from_documents(documents)

# Query with intelligent routing
query_engine = index.as_query_engine()
response = query_engine.query("What are the benefits of RAG?")
print(response)
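
Building the index also needs an embedding model. By default LlamaIndex falls back to OpenAI embeddings, which will pick up the OPENAI_API_KEY set earlier. If Adaptive's OpenAI-compatible endpoint also serves embeddings (an assumption in this sketch, not something the snippets above confirm), you can point the embedding model at it explicitly; the model name text-embedding-3-small is illustrative.

import os
from llama_index.core import Settings
from llama_index.embeddings.openai import OpenAIEmbedding

# Hypothetical: route embedding calls through the same endpoint.
# If Adaptive does not serve embeddings, keep the default OpenAI embeddings instead.
Settings.embed_model = OpenAIEmbedding(
    model="text-embedding-3-small",  # illustrative model name
    api_base="https://api.llmadaptive.uk/v1",
    api_key=os.environ["ADAPTIVE_API_KEY"],
)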

Key Benefits

  • Intelligent routing - Automatic model selection for queries and agents
  • RAG-optimized - Adaptive selects models based on query complexity
  • Cost optimization - 30-70% cost reduction across RAG pipelines
  • Agent support - Smart routing for function-calling agents (see the sketch after this list)

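A minimal function-calling agent sketch, reusing the llm configured in Basic Integration. It assumes a llama-index release that ships FunctionCallingAgent under llama_index.core.agent (agent interfaces have moved between versions, so check your installed version); the multiply tool is a made-up example.

from llama_index.core.agent import FunctionCallingAgent
from llama_index.core.tools import FunctionTool

def multiply(a: float, b: float) -> float:
    """Multiply two numbers."""
    return a * b

# Wrap the plain function as a tool the agent can call
multiply_tool = FunctionTool.from_defaults(fn=multiply)

# Tool calls are served by the Adaptive-routed llm configured above
agent = FunctionCallingAgent.from_tools([multiply_tool], llm=llm, verbose=True)
print(agent.chat("What is 21.5 times 4?"))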
Next Steps