Quick Setup

  1. Install LangChain with OpenAI:
pip install langchain langchain-openai
  2. Point LangChain at Adaptive's base URL:
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    api_key="your-adaptive-api-key",
    base_url="https://www.llmadaptive.uk/api/v1",
    model="" # Empty string enables intelligent routing
)

response = llm.invoke("Explain machine learning simply")
print(response.content)
That’s it! Your LangChain code now uses intelligent routing.

Common Patterns

Streaming Responses

for chunk in llm.stream("Tell me a story about AI"):
    print(chunk.content, end="", flush=True)

Chains

from langchain_core.prompts import PromptTemplate

prompt = PromptTemplate(
    input_variables=["topic"],
    template="Write a brief summary about {topic}"
)

# LCEL: pipe the prompt into the model (LLMChain and chain.run are deprecated)
chain = prompt | llm
result = chain.invoke({"topic": "artificial intelligence"})
print(result.content)

Function/Tool Calling

from langchain_core.tools import tool

@tool
def get_weather(location: str) -> str:
    """Get the current weather for a location."""
    return f"Weather in {location}: Sunny, 72°F"

# Use with agents or tool-calling chains
tools = [get_weather]
# Your agent setup here...
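Whichever agent framework you wire this into, the core step is the same: the model replies with a tool call, your code looks up the matching function, runs it, and feeds the result back. A minimal, framework-free sketch of that dispatch step (the dict shape mirrors the entries LangChain parses into `AIMessage.tool_calls`; no model call is made here):

```python
def get_weather(location: str) -> str:
    """Get the current weather for a location."""
    return f"Weather in {location}: Sunny, 72°F"

# Registry mapping tool names to callables, as an agent loop would keep.
TOOLS = {"get_weather": get_weather}

def execute_tool_call(tool_call: dict) -> str:
    """Run one model-produced tool call and return its string result."""
    fn = TOOLS[tool_call["name"]]
    return fn(**tool_call["args"])

# A tool call shaped like an AIMessage.tool_calls entry ("id" omitted).
call = {"name": "get_weather", "args": {"location": "Paris"}}
print(execute_tool_call(call))  # Weather in Paris: Sunny, 72°F
```

In a real agent, the result string is sent back to the model as a tool message so it can compose its final answer.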

What You Get

Same LangChain API

All LangChain features work without any changes

Intelligent Routing

Automatic model selection for each request

Cost Optimization

Significant savings through smart provider selection

Provider Transparency

See which AI provider was used in response metadata

Environment Variables

ADAPTIVE_API_KEY=your-adaptive-api-key
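With the variable exported, the client can read the key from the environment instead of hard-coding it. A minimal sketch using Python's standard `os` module (the client call is shown as a comment so the snippet runs without any extra dependencies):

```python
import os

# Read the Adaptive key from the environment instead of hard-coding it;
# defaults to an empty string so a missing key is easy to detect.
api_key = os.getenv("ADAPTIVE_API_KEY", "")

# Pass it to the client exactly as in the Quick Setup snippet:
# llm = ChatOpenAI(
#     api_key=api_key,
#     base_url="https://www.llmadaptive.uk/api/v1",
#     model="",  # empty string enables intelligent routing
# )
print("API key configured:", bool(api_key))
```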

Migration from OpenAI

Point your client at Adaptive's base URL and swap in your Adaptive API key; everything else stays the same:
# Before
llm = ChatOpenAI(
    api_key="sk-openai-key",
    # base_url defaults to OpenAI
    model="gpt-4"
)

# After  
llm = ChatOpenAI(
    api_key="your-adaptive-api-key",     # ← New API key
    base_url="https://www.llmadaptive.uk/api/v1", # ← Add this line
    model=""  # ← Empty for intelligent routing
)

Next Steps