
Overview

LangGraph is a low-level orchestration framework for building stateful, multi-actor applications with LLMs. By integrating Adaptive with the ChatOpenAI model from langchain-openai, you get intelligent model routing while building complex agent workflows with graphs, state management, and tool integration.

Key Benefits

  • Keep existing workflows - No changes to your LangGraph graph structure
  • Intelligent routing - Automatic model selection for each agent interaction
  • Cost optimization - 30-70% cost reduction across agent executions
  • Stateful agents - Works seamlessly with LangGraph’s state management
  • Tool support - Adaptive selects function-calling capable models automatically
  • Streaming support - Real-time responses in agent workflows

Installation

pip install langgraph langchain-openai
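
The TypeScript examples later on this page use the equivalent JavaScript packages; assuming npm, they can be installed with:
npm install @langchain/langgraph @langchain/openai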

Basic Usage

Initialize ChatOpenAI with Adaptive

The only change needed is to point ChatOpenAI (from langchain-openai) at Adaptive's endpoint:
from langchain_openai import ChatOpenAI

model = ChatOpenAI(
    api_key="your-adaptive-api-key",
    base_url="https://llmadaptive.uk/api/v1",
    model="",  # Empty string enables intelligent routing
    temperature=0,
)

Simple Chatbot with StateGraph

from langgraph.graph import StateGraph, MessagesState, START, END
from langchain_openai import ChatOpenAI

# Initialize model with Adaptive
model = ChatOpenAI(
    api_key="your-adaptive-api-key",
    base_url="https://llmadaptive.uk/api/v1",
    model="",
    temperature=0,
)

# Define the chatbot function
def call_model(state: MessagesState):
    response = model.invoke(state["messages"])
    return {"messages": [response]}

# Create the graph
workflow = StateGraph(MessagesState)
workflow.add_node("agent", call_model)
workflow.add_edge(START, "agent")
workflow.add_edge("agent", END)

app = workflow.compile()

# Use the chatbot
result = app.invoke({
    "messages": [{"role": "user", "content": "What is LangGraph?"}]
})

print(result["messages"][-1].content)

Advanced Examples

Agent with Tools

Adaptive automatically selects models that support function calling when tools are detected:
from langgraph.graph import StateGraph, MessagesState, START, END
from langgraph.prebuilt import ToolNode
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool

# Define tools
@tool
def get_weather(location: str) -> str:
    """Get the current weather for a location."""
    # In a real app, call a weather API here
    return f"Weather in {location}: 72°F, sunny"

tools = [get_weather]

# Initialize model with tools
model = ChatOpenAI(
    api_key="your-adaptive-api-key",
    base_url="https://llmadaptive.uk/api/v1",
    model="",
    temperature=0,
).bind_tools(tools)

# Define the agent function
def call_model(state: MessagesState):
    response = model.invoke(state["messages"])
    return {"messages": [response]}

# Routing function
def should_continue(state: MessagesState):
    messages = state["messages"]
    last_message = messages[-1]
    if last_message.tool_calls:
        return "tools"
    return END

# Create the graph
workflow = StateGraph(MessagesState)
workflow.add_node("agent", call_model)
workflow.add_node("tools", ToolNode(tools))
workflow.add_edge(START, "agent")
workflow.add_conditional_edges("agent", should_continue, ["tools", END])
workflow.add_edge("tools", "agent")

app = workflow.compile()

# Use the agent
result = app.invoke({
    "messages": [{"role": "user", "content": "What's the weather in San Francisco?"}]
})
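
As in the basic example, the agent's final answer is the last message in the returned state:
print(result["messages"][-1].content)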

Streaming Agent Responses

This and the remaining examples use LangGraph's JavaScript/TypeScript API; the same patterns apply in Python.
import { StateGraph, MessagesAnnotation } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  apiKey: process.env.ADAPTIVE_API_KEY,
  configuration: {
    baseURL: "https://llmadaptive.uk/api/v1",
  },
  modelName: "",
  temperature: 0,
});

async function callModel(state: typeof MessagesAnnotation.State) {
  const response = await model.invoke(state.messages);
  return { messages: [response] };
}

const workflow = new StateGraph(MessagesAnnotation)
  .addNode("agent", callModel)
  .addEdge("__start__", "agent")
  .addEdge("agent", "__end__");

const app = workflow.compile();

// Stream graph updates (one chunk per node as it completes)
const stream = await app.stream(
  { messages: [{ role: "user", content: "Write a short poem about AI" }] },
  { streamMode: "updates" }
);

for await (const chunk of stream) {
  if (chunk.agent && chunk.agent.messages) {
    const message = chunk.agent.messages[0];
    if (message && message.content) {
      process.stdout.write(message.content);
    }
  }
}
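
The loop above emits one update per node, so the full reply arrives in a single chunk. For token-by-token output you can switch to message streaming; this is a minimal sketch assuming a @langchain/langgraph version that supports streamMode: "messages":
// Token-level streaming sketch; verify streamMode "messages" support in your LangGraph version
const tokenStream = await app.stream(
  { messages: [{ role: "user", content: "Write a short poem about AI" }] },
  { streamMode: "messages" }
);

for await (const [messageChunk] of tokenStream) {
  // Each item pairs a partial AIMessage with metadata; print text content as it arrives
  if (typeof messageChunk.content === "string") {
    process.stdout.write(messageChunk.content);
  }
}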

Multi-Agent Workflow

import { StateGraph, Annotation } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";

// Define custom state with multiple agents
const WorkflowState = Annotation.Root({
  messages: Annotation<any[]>({
    reducer: (x, y) => x.concat(y),
  }),
  currentAgent: Annotation<string>({
    reducer: (x, y) => y ?? x,
    default: () => "researcher",
  }),
});

// Initialize different models for different agents
const researcherModel = new ChatOpenAI({
  apiKey: process.env.ADAPTIVE_API_KEY,
  configuration: {
    baseURL: "https://llmadaptive.uk/api/v1",
  },
  modelName: "",
  temperature: 0.3,
});

const writerModel = new ChatOpenAI({
  apiKey: process.env.ADAPTIVE_API_KEY,
  configuration: {
    baseURL: "https://llmadaptive.uk/api/v1",
  },
  modelName: "",
  temperature: 0.7,
});

// Define agent nodes
async function researcher(state: typeof WorkflowState.State) {
  const response = await researcherModel.invoke([
    { role: "system", content: "You are a research specialist." },
    ...state.messages,
  ]);
  return {
    messages: [response],
    currentAgent: "writer",
  };
}

async function writer(state: typeof WorkflowState.State) {
  const response = await writerModel.invoke([
    { role: "system", content: "You are a creative writer." },
    ...state.messages,
  ]);
  return {
    messages: [response],
    currentAgent: "end",
  };
}

// Router function
function router(state: typeof WorkflowState.State) {
  return state.currentAgent === "writer" ? "writer" : "__end__";
}

// Build the workflow
const workflow = new StateGraph(WorkflowState)
  .addNode("researcher", researcher)
  .addNode("writer", writer)
  .addEdge("__start__", "researcher")
  .addConditionalEdges("researcher", router)
  .addEdge("writer", "__end__");

const app = workflow.compile();

// Execute multi-agent workflow
const result = await app.invoke({
  messages: [{ role: "user", content: "Research and write about quantum computing" }],
});

Integration Patterns

With Memory/Checkpointing

import { MemorySaver } from "@langchain/langgraph";
import { StateGraph, MessagesAnnotation } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  apiKey: process.env.ADAPTIVE_API_KEY,
  configuration: {
    baseURL: "https://llmadaptive.uk/api/v1",
  },
  modelName: "",
  temperature: 0,
});

async function callModel(state: typeof MessagesAnnotation.State) {
  const response = await model.invoke(state.messages);
  return { messages: [response] };
}

const workflow = new StateGraph(MessagesAnnotation)
  .addNode("agent", callModel)
  .addEdge("__start__", "agent")
  .addEdge("agent", "__end__");

// Add memory for persistent conversations
const memory = new MemorySaver();
const app = workflow.compile({ checkpointer: memory });

// Use with conversation threads
const config = { configurable: { thread_id: "user-123" } };

// First message
await app.invoke(
  { messages: [{ role: "user", content: "My name is Alice" }] },
  config
);

// Follow-up message (remembers context)
const result = await app.invoke(
  { messages: [{ role: "user", content: "What's my name?" }] },
  config
);
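
The returned state carries the full message history for the thread, so the model's reply to the follow-up is simply the last message:
console.log(result.messages[result.messages.length - 1].content);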

Human-in-the-Loop

import { StateGraph, MessagesAnnotation } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";
import { MemorySaver } from "@langchain/langgraph";

const model = new ChatOpenAI({
  apiKey: process.env.ADAPTIVE_API_KEY,
  configuration: {
    baseURL: "https://llmadaptive.uk/api/v1",
  },
  modelName: "",
  temperature: 0,
});

async function callModel(state: typeof MessagesAnnotation.State) {
  const response = await model.invoke(state.messages);
  return { messages: [response] };
}

const workflow = new StateGraph(MessagesAnnotation)
  .addNode("agent", callModel)
  .addEdge("__start__", "agent")
  .addEdge("agent", "__end__");

// Compile with interrupt before agent node for human approval
const memory = new MemorySaver();
const app = workflow.compile({
  checkpointer: memory,
  interruptBefore: ["agent"],
});

const config = { configurable: { thread_id: "conversation-1" } };

// Start the workflow (will pause before agent)
await app.invoke(
  { messages: [{ role: "user", content: "Send an email to the team" }] },
  config
);

// Get current state for human review
const state = await app.getState(config);
console.log("Pending action:", state.values);

// Human approves and continues
await app.invoke(null, config);

Configuration Options

Model Selection

  • Empty string: Intelligent routing (recommended)
  • Specific model: Force a particular model
  • Provider only: Let Adaptive choose the best model from that provider
// Intelligent routing (recommended)
modelName: ""

// Specific model
modelName: "gpt-4o"

// Provider selection (Adaptive chooses best model)
modelName: "openai"

Temperature and Parameters

All standard ChatOpenAI parameters work with Adaptive:
const model = new ChatOpenAI({
  apiKey: process.env.ADAPTIVE_API_KEY,
  configuration: {
    baseURL: "https://llmadaptive.uk/api/v1",
  },
  modelName: "",
  temperature: 0.7,
  maxTokens: 1000,
  topP: 1,
  frequencyPenalty: 0,
  presencePenalty: 0,
});

Best Practices

  1. Use an empty model string for intelligent routing across agent nodes
  2. Use different temperatures for different agents (research vs. creative)
  3. Leverage checkpointing for stateful conversations with memory
  4. Use conditional edges for complex routing logic
  5. Add human-in-the-loop interrupt points for critical decisions
  6. Bind tools freely - Adaptive automatically selects function-calling capable models

Error Handling

import { StateGraph, MessagesAnnotation } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  apiKey: process.env.ADAPTIVE_API_KEY,
  configuration: {
    baseURL: "https://llmadaptive.uk/api/v1",
  },
  modelName: "",
  temperature: 0,
});

async function callModel(state: typeof MessagesAnnotation.State) {
  try {
    const response = await model.invoke(state.messages);

    // Log which model Adaptive selected
    if (response.response_metadata) {
      const modelName = response.response_metadata.model_name ||
                       response.response_metadata.model ||
                       "unknown";
      console.log(`Adaptive selected: ${modelName}`);
    }

    return { messages: [response] };
  } catch (error: any) {
    if (error.status === 429) {
      console.log("Rate limited, Adaptive will retry with fallback...");
      throw error; // Let Adaptive handle retry
    }

    console.error("Error in agent:", error.message);
    return {
      messages: [{
        role: "assistant",
        content: "I encountered an error. Please try again.",
      }],
    };
  }
}

const workflow = new StateGraph(MessagesAnnotation)
  .addNode("agent", callModel)
  .addEdge("__start__", "agent")
  .addEdge("agent", "__end__");

const app = workflow.compile();

Complete Example

See the complete LangGraph example for a full working implementation including:
  • Stateful agent with memory
  • Tool integration with conditional routing
  • Multi-agent workflows
  • Streaming responses
  • Human-in-the-loop patterns
  • Error handling

Next Steps
