
Get Your Adaptive API Key

Sign up here to create an account and generate your API key.

Overview

CrewAI is a Python framework for orchestrating autonomous AI agents that collaborate to complete complex tasks. By integrating Adaptive with CrewAI, you get intelligent model routing across all your agents while building sophisticated multi-agent systems with role-based architectures.

Key Benefits

  • Keep existing workflows - No changes to your CrewAI crew structure
  • Intelligent routing - Automatic model selection for each agent interaction
  • Cost optimization - 30-70% cost reduction across agent executions
  • Role-based agents - Works seamlessly with CrewAI’s agent roles and tasks
  • Tool support - Adaptive selects function-calling capable models automatically
  • Multi-agent collaboration - Optimized routing for each agent’s specific needs

Installation

pip install crewai crewai-tools
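
If you plan to load credentials from a .env file (see Using Environment Variables below), also install python-dotenv:
pip install python-dotenv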

Quick Setup

1. Configure Adaptive as Your LLM Provider

The only change needed is to configure CrewAI’s LLM to use Adaptive’s endpoint:
from crewai import Agent, Task, Crew, LLM

# Initialize LLM with Adaptive
llm = LLM(
    model="",  # Empty string enables intelligent routing
    api_key="your-adaptive-api-key",
    base_url="https://api.llmadaptive.uk/v1"
)

# Create an agent with Adaptive
agent = Agent(
    role='Research Analyst',
    goal='Analyze market trends and provide insights',
    backstory='An expert analyst with years of market research experience',
    llm=llm,
    verbose=True
)

2. Create a Simple Crew

from crewai import Agent, Task, Crew, LLM

# Configure Adaptive LLM
llm = LLM(
    model="",
    api_key="your-adaptive-api-key",
    base_url="https://api.llmadaptive.uk/v1"
)

# Define agents
researcher = Agent(
    role='Researcher',
    goal='Research and gather information on given topics',
    backstory='A thorough researcher with attention to detail',
    llm=llm
)

writer = Agent(
    role='Writer',
    goal='Create engaging content based on research',
    backstory='A creative writer who can transform data into stories',
    llm=llm
)

# Define tasks
research_task = Task(
    description='Research the latest trends in AI technology',
    agent=researcher,
    expected_output='A comprehensive report on AI trends'
)

writing_task = Task(
    description='Write an article based on the research findings',
    agent=writer,
    expected_output='A well-written article about AI trends'
)

# Create and run the crew
crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, writing_task],
    verbose=True
)

result = crew.kickoff()
print(result)

Advanced Patterns

Multi-Agent Workflow with Different Models

You can configure different agents with different settings while using Adaptive’s intelligent routing:
from crewai import Agent, Task, Crew, LLM

# Creative agent with higher temperature
creative_llm = LLM(
    model="",
    api_key="your-adaptive-api-key",
    base_url="https://api.llmadaptive.uk/v1",
    temperature=0.9
)

# Analytical agent with lower temperature
analytical_llm = LLM(
    model="",
    api_key="your-adaptive-api-key",
    base_url="https://api.llmadaptive.uk/v1",
    temperature=0.1
)

# Create specialized agents
creative_agent = Agent(
    role='Creative Director',
    goal='Generate innovative ideas and concepts',
    backstory='An innovative thinker with a track record of breakthrough ideas',
    llm=creative_llm
)

analyst_agent = Agent(
    role='Data Analyst',
    goal='Analyze data and provide statistical insights',
    backstory='A detail-oriented analyst with expertise in data science',
    llm=analytical_llm
)

# Define tasks
ideation_task = Task(
    description='Brainstorm 10 innovative product ideas for a tech startup',
    agent=creative_agent,
    expected_output='A list of 10 creative product ideas with descriptions'
)

analysis_task = Task(
    description='Analyze the market potential of each idea',
    agent=analyst_agent,
    expected_output='Market analysis report with viability scores'
)

# Create crew
crew = Crew(
    agents=[creative_agent, analyst_agent],
    tasks=[ideation_task, analysis_task],
    verbose=True
)

result = crew.kickoff()

Agents with Tools

Adaptive automatically selects models that support function calling when tools are used:
from crewai import Agent, Task, Crew, LLM
from crewai_tools import SerperDevTool, WebsiteSearchTool

# Configure Adaptive LLM
llm = LLM(
    model="",
    api_key="your-adaptive-api-key",
    base_url="https://api.llmadaptive.uk/v1"
)

# Initialize tools
search_tool = SerperDevTool()
web_tool = WebsiteSearchTool()

# Create agent with tools
research_agent = Agent(
    role='Research Specialist',
    goal='Conduct thorough research using web tools',
    backstory='An expert researcher who knows how to find information quickly',
    tools=[search_tool, web_tool],
    llm=llm,
    verbose=True
)

# Define task
research_task = Task(
    description='Research the latest developments in quantum computing',
    agent=research_agent,
    expected_output='A detailed report on quantum computing advances'
)

# Create and run crew
crew = Crew(
    agents=[research_agent],
    tasks=[research_task],
    verbose=True
)

result = crew.kickoff()

Sequential and Hierarchical Processes

CrewAI supports different process types, and Adaptive works with all of them:
from crewai import Agent, Task, Crew, Process, LLM

# Configure Adaptive LLM
llm = LLM(
    model="",
    api_key="your-adaptive-api-key",
    base_url="https://api.llmadaptive.uk/v1"
)

# Create agents
planner = Agent(
    role='Project Planner',
    goal='Create comprehensive project plans',
    backstory='An experienced project manager',
    llm=llm
)

developer = Agent(
    role='Developer',
    goal='Implement features based on plans',
    backstory='A skilled software developer',
    llm=llm
)

reviewer = Agent(
    role='Code Reviewer',
    goal='Review and provide feedback on implementations',
    backstory='A senior engineer with high standards',
    llm=llm
)

# Define tasks
planning_task = Task(
    description='Create a plan for building a REST API',
    agent=planner,
    expected_output='A detailed project plan with milestones'
)

development_task = Task(
    description='Outline the implementation approach',
    agent=developer,
    expected_output='Technical implementation details'
)

review_task = Task(
    description='Review the implementation plan',
    agent=reviewer,
    expected_output='Review feedback and recommendations'
)

# Sequential process (default)
crew = Crew(
    agents=[planner, developer, reviewer],
    tasks=[planning_task, development_task, review_task],
    process=Process.sequential,
    verbose=True
)

result = crew.kickoff()
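
The example above uses the default sequential process. CrewAI also offers a hierarchical process, in which a manager model delegates and reviews the tasks. Below is a minimal sketch that reuses the LLM, agents, and tasks defined above; manager_llm is CrewAI's parameter for the manager model, and here it also routes through Adaptive:
# Hierarchical process: a manager model plans, delegates, and reviews task execution
hierarchical_crew = Crew(
    agents=[planner, developer, reviewer],
    tasks=[planning_task, development_task, review_task],
    process=Process.hierarchical,
    manager_llm=llm,  # manager requests also go through Adaptive's routing
    verbose=True
)

result = hierarchical_crew.kickoff()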

Using Environment Variables

Configure Adaptive using environment variables for cleaner code:
import os
from crewai import Agent, Task, Crew, LLM

# Set environment variables
os.environ["OPENAI_API_KEY"] = "your-adaptive-api-key"
os.environ["OPENAI_API_BASE"] = "https://api.llmadaptive.uk/v1"

# Create LLM (automatically uses environment variables)
llm = LLM(model="")

# Create agents
agent = Agent(
    role='Assistant',
    goal='Help users with their tasks',
    backstory='A helpful AI assistant',
    llm=llm
)
Or in a .env file:
# .env
OPENAI_API_KEY=your-adaptive-api-key
OPENAI_API_BASE=https://api.llmadaptive.uk/v1

Then load it in Python:
from dotenv import load_dotenv
from crewai import Agent, LLM

load_dotenv()

llm = LLM(model="")

agent = Agent(
    role='Assistant',
    goal='Help users efficiently',
    backstory='An efficient AI assistant',
    llm=llm
)

Configuration Options

Model Selection

  • Empty string (recommended): Enables Adaptive’s intelligent routing
  • Specific model: Forces a particular model (e.g., “gpt-4.1-nano”)
  • Provider only: Lets Adaptive choose the best model from that provider (e.g., “openai”)
# Intelligent routing (recommended)
llm = LLM(
    model="",
    api_key="your-adaptive-api-key",
    base_url="https://api.llmadaptive.uk/v1"
)

# Specific model
llm = LLM(
    model="gpt-4.1-nano",
    api_key="your-adaptive-api-key",
    base_url="https://api.llmadaptive.uk/v1"
)

# Provider selection
llm = LLM(
    model="openai",
    api_key="your-adaptive-api-key",
    base_url="https://api.llmadaptive.uk/v1"
)

Additional LLM Parameters

All standard LLM parameters work with Adaptive:
llm = LLM(
    model="",
    api_key="your-adaptive-api-key",
    base_url="https://api.llmadaptive.uk/v1",
    temperature=0.7,
    max_tokens=1000,
    top_p=0.9,
    frequency_penalty=0.1,
    presence_penalty=0.1,
    stop=["END"],
    seed=42
)

Complete Example: Content Creation Crew

from crewai import Agent, Task, Crew, LLM
from crewai_tools import SerperDevTool

# Configure Adaptive
llm = LLM(
    model="",
    api_key="your-adaptive-api-key",
    base_url="https://api.llmadaptive.uk/v1"
)

# Initialize tools
search_tool = SerperDevTool()

# Create specialized agents
researcher = Agent(
    role='Content Researcher',
    goal='Research trending topics and gather relevant information',
    backstory='A skilled researcher who knows how to find the best sources',
    tools=[search_tool],
    llm=llm,
    verbose=True
)

writer = Agent(
    role='Content Writer',
    goal='Write engaging blog posts based on research',
    backstory='A talented writer with years of experience in content creation',
    llm=llm,
    verbose=True
)

editor = Agent(
    role='Editor',
    goal='Review and improve written content',
    backstory='A meticulous editor with an eye for detail',
    llm=llm,
    verbose=True
)

# Define tasks
research_task = Task(
    description='Research the topic: "Benefits of AI in Healthcare"',
    agent=researcher,
    expected_output='A comprehensive research summary with key points and sources'
)

writing_task = Task(
    description='Write a 1000-word blog post about AI in Healthcare based on the research',
    agent=writer,
    expected_output='A well-structured, engaging blog post'
)

editing_task = Task(
    description='Review and edit the blog post for clarity, grammar, and engagement',
    agent=editor,
    expected_output='A polished, publication-ready blog post'
)

# Create and run the crew
content_crew = Crew(
    agents=[researcher, writer, editor],
    tasks=[research_task, writing_task, editing_task],
    verbose=True
)

# Execute the workflow
result = content_crew.kickoff()
print("\n=== Final Result ===")
print(result)

Best Practices

  1. Use an empty model string for intelligent routing across all agents
  2. Set different temperatures for different agent personalities (creative vs. analytical)
  3. Leverage tools - Adaptive automatically selects function-calling capable models
  4. Use environment variables for cleaner configuration management
  5. Enable verbose mode during development to see Adaptive’s model selection
  6. Use sequential processes for tasks that depend on each other
  7. Give agents specific roles and backstories for better results

Migration from OpenAI

Simply update your LLM configuration to point to Adaptive:
# Before - Direct OpenAI
from crewai import Agent, LLM

llm = LLM(
    model="gpt-4",
    api_key="sk-openai-key"
)

# After - Through Adaptive
llm = LLM(
    model="",  # Empty for intelligent routing
    api_key="your-adaptive-api-key",
    base_url="https://api.llmadaptive.uk/v1"
)

What You Get

  • Same CrewAI API - All CrewAI features work without any changes to your crew structure
  • Intelligent Routing - Automatic model selection optimized for each agent’s role
  • Cost Optimization - Significant savings through smart provider selection per agent
  • Tool Support - Automatic selection of function-calling models when tools are used

Error Handling

from crewai import Agent, Task, Crew, LLM

try:
    llm = LLM(
        model="",
        api_key="your-adaptive-api-key",
        base_url="https://api.llmadaptive.uk/v1"
    )

    agent = Agent(
        role='Assistant',
        goal='Complete tasks efficiently',
        backstory='A reliable AI assistant',
        llm=llm
    )

    task = Task(
        description='Summarize recent AI developments',
        agent=agent,
        expected_output='A concise summary'
    )

    crew = Crew(
        agents=[agent],
        tasks=[task],
        verbose=True
    )

    result = crew.kickoff()
    print(result)

except Exception as e:
    print(f"Error: {e}")
    # Adaptive handles provider fallback automatically
    # Check your API key and base_url if errors persist
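
If a run still fails, for example due to a transient network issue, a simple client-side retry around kickoff() can help. This is a minimal sketch reusing the crew from the example above; the attempt count and backoff values are illustrative, not Adaptive requirements:
import time

# Retry kickoff() a few times with exponential backoff on transient failures
for attempt in range(3):
    try:
        result = crew.kickoff()
        print(result)
        break
    except Exception as e:
        if attempt == 2:
            raise  # give up after the final attempt
        print(f"Attempt {attempt + 1} failed: {e}")
        time.sleep(2 ** attempt)  # wait 1s, then 2s, before retrying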

Next Steps