Overview

Use Adaptive as your OpenAI-compatible base URL in LangChain to enable intelligent routing, streaming, and cost optimizations.

Installation

pip install langchain langchain-openai

Quick Start

Python

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    api_key="ADAPTIVE_KEY",                    # your Adaptive API key
    base_url="https://llmadaptive.uk/api/v1",
    model="",                                  # empty string enables intelligent routing
)
response = llm.invoke("Explain ML simply")
print(response.content)

JavaScript

import { ChatOpenAI } from "@langchain/openai";
const llm = new ChatOpenAI({
  apiKey: "ADAPTIVE_KEY",
  baseURL: "https://llmadaptive.uk/api/v1",
  model: "", // empty string enables intelligent routing
});
const res = await llm.invoke("Explain quantum computing");
console.log(res.content);

Advanced Usage

Streaming

for chunk in llm.stream("Tell me a story"):
    print(chunk.content, end="")

Cost & Provider Constraints

llm = ChatOpenAI(
    api_key="ADAPTIVE_KEY",
    base_url="https://llmadaptive.uk/api/v1",
    model="",
    model_kwargs={
        "provider_constraint": ["openai", "anthropic"],  # restrict routing to these providers
        "cost_bias": 0.2,                                # adjust the performance vs. cost trade-off
    },
)

Extras

  • Function calling, tools, chains, and memory support all work out of the box.
  • For embeddings, use OpenAI’s API (embeddings coming soon!).
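Because Adaptive exposes an OpenAI-compatible API, tool definitions pass through unchanged. Below is a minimal sketch of the request body LangChain builds for you when you bind tools; the get_weather tool is hypothetical, and in practice you would use llm.bind_tools(...) rather than constructing this payload by hand.

```python
import json

# Hypothetical tool definition in the OpenAI-compatible "tools" format.
get_weather = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

# The chat-completions request body then carries the tool alongside the messages.
body = {
    "model": "",  # empty string keeps intelligent routing active
    "messages": [{"role": "user", "content": "Weather in Paris?"}],
    "tools": [get_weather],
}
print(json.dumps(body, indent=2))
```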

Migration

Migrating from a direct OpenAI setup is a one-line change:
- model="gpt-3.5-turbo"
+ model=""           # empty string lets Adaptive route intelligently

Best Practices

Tip                       Description
Leave model=""            Enables intelligent routing
Use cost_bias             Adjusts the performance vs. cost trade-off
Add provider_constraint   Controls which API providers are used
Handle errors             Wrap calls in try/except (Python) or try/catch (JS), with retries
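The last tip can be sketched as a small retry helper using only the standard library. Here call_model is a stand-in for llm.invoke(...), and the attempt count, backoff values, and caught exception type are assumptions to tune for your setup.

```python
import time

def with_retries(fn, attempts=3, backoff=0.1):
    """Call fn(), retrying on failure with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries: surface the error
            time.sleep(backoff * (2 ** attempt))

# Stand-in for llm.invoke(...): fails twice, then succeeds.
calls = {"n": 0}
def call_model():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient network error")
    return "ok"

print(with_retries(call_model))
```

In production you would catch the specific exceptions your client raises rather than a bare Exception, so that authentication errors fail fast instead of being retried.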