Configure Gemini CLI to use Adaptive’s intelligent routing as a drop-in replacement for Google’s API backend. Intelligent routing typically saves 60-80% on AI development costs and gives you access to Anthropic Claude, OpenAI, and other providers through a unified interface with automatic load balancing and fallbacks.

Benefits of Using Gemini CLI with Adaptive

When you integrate Gemini CLI with Adaptive, you unlock powerful capabilities:

Developer Benefits

  • Universal Model Access: Use any LiteLLM-supported model (Anthropic Claude, OpenAI GPT-4, Vertex AI, Bedrock, etc.) through the Gemini CLI interface
  • Higher Rate Limits & Reliability: Load balance across multiple models and providers to avoid hitting individual provider limits
  • Automatic Fallbacks: Get responses even if one provider fails—Adaptive automatically routes to the next available model
  • Cost Optimization: Intelligent routing selects the most cost-effective model for each request

Admin Benefits

  • Centralized Management: Control access to all models through a single Adaptive proxy without giving developers API keys to each provider
  • Budget Controls: Set spending limits and track costs across all Gemini CLI usage
  • Usage Analytics: Monitor model usage, costs, and performance in real-time

Get Your Adaptive API Key

Visit llmadaptive.uk to create an account and generate your API key.

Quick Setup

Run Automated Installer

curl -fsSL https://raw.githubusercontent.com/Egham-7/adaptive/main/scripts/installers/gemini-cli.sh | bash
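If you prefer to review the script before executing it (a good habit with piped installers), download it first:
# Download and inspect the installer before running it
curl -fsSL https://raw.githubusercontent.com/Egham-7/adaptive/main/scripts/installers/gemini-cli.sh -o gemini-cli-installer.sh
less gemini-cli-installer.sh
bash gemini-cli-installer.sh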
The installer will automatically:
  • Install Gemini CLI if not present (via npm)
  • Configure environment variables for Adaptive routing
  • Add configuration to your shell profile (~/.bashrc, ~/.zshrc, etc.); a sketch of these lines follows this list
  • Verify the installation
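For reference, the block the installer appends to your shell profile looks roughly like this (illustrative; exact contents may vary by installer version):
# Gemini CLI with Adaptive LLM API Configuration
export GEMINI_API_KEY="your-adaptive-api-key-here"
export GOOGLE_GEMINI_BASE_URL="https://www.llmadaptive.uk/api/v1beta"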

Verify Configuration

gemini --version
echo $GEMINI_API_KEY
echo $GOOGLE_GEMINI_BASE_URL
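If everything is configured, the echo commands print your API key and https://www.llmadaptive.uk/api/v1beta. A quick sanity check that warns when either variable is missing (a sketch, assuming a POSIX shell):
# Warn if either required variable is unset or empty
[ -n "$GEMINI_API_KEY" ] || echo "GEMINI_API_KEY is not set"
[ -n "$GOOGLE_GEMINI_BASE_URL" ] || echo "GOOGLE_GEMINI_BASE_URL is not set"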

Start Using with Multi-Provider Access

gemini "explain quantum computing"
Adaptive will automatically route your request to the optimal model across all available providers.
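Routing decisions depend on your Adaptive configuration, but in general lightweight prompts can be served by cheaper models while complex tasks go to stronger ones. For example:
# Likely served by a fast, inexpensive model
gemini "summarize: the meeting moved to 3pm"

# Likely routed to a stronger model
gemini "design a rate limiter for a distributed API gateway"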

Manual Installation

If you prefer to set up Gemini CLI manually or need more control over the installation process:

Step 1: Install Gemini CLI

npm install -g @google/gemini-cli

Step 2: Configure Environment Variables

# Gemini CLI with Adaptive LLM API Configuration
export GEMINI_API_KEY="your-adaptive-api-key-here"
export GOOGLE_GEMINI_BASE_URL="https://www.llmadaptive.uk/api/v1beta"
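One way to persist these settings is to append them to your shell profile (a sketch, assuming Bash; use ~/.zshrc for Zsh, and replace the placeholder with your real key):
# Append the Adaptive configuration to your Bash profile
cat >> ~/.bashrc <<'EOF'
# Gemini CLI with Adaptive LLM API Configuration
export GEMINI_API_KEY="your-adaptive-api-key-here"
export GOOGLE_GEMINI_BASE_URL="https://www.llmadaptive.uk/api/v1beta"
EOF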

Step 3: Apply Configuration

# For Bash/Zsh
source ~/.bashrc  # or ~/.zshrc

# For Fish
source ~/.config/fish/config.fish

# Or restart your terminal
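To confirm the variables persist in new sessions, check from a fresh shell (a sketch; which profile file gets read depends on your shell and whether it runs as a login shell):
# Print the base URL from a brand-new login shell (assumes bash)
bash -lc 'echo "$GOOGLE_GEMINI_BASE_URL"'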

Step 4: Verify Installation

gemini --version
gemini "test connection"

Alternative Setup Methods

Pre-set your Adaptive API key so the installer can use it when configuring your shell:
export ADAPTIVE_API_KEY='your-api-key-here'
curl -fsSL https://raw.githubusercontent.com/Egham-7/adaptive/main/scripts/installers/gemini-cli.sh | bash
# The installer will automatically configure your shell

Advanced Configuration

Using Multi-Provider Model Routing

With Adaptive, you can configure Gemini CLI to use models from multiple providers:
# Use Claude models through Gemini CLI
export ADAPTIVE_MODEL='claude-sonnet-4-20250514'
gemini "write a Python function"

Model Group Aliases

For advanced use cases, configure model aliases in your Adaptive proxy to route Gemini model requests to any provider:
proxy_config.yaml
model_list:
  - model_name: claude-sonnet-4-20250514
    litellm_params:
      model: anthropic/claude-3-5-sonnet-20241022
      api_key: os.environ/ANTHROPIC_API_KEY

  - model_name: gpt-4o-latest
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY

router_settings:
  model_group_alias:
    "gemini-2.5-pro": "claude-sonnet-4-20250514"
    "gemini-2.5-flash": "gpt-4o-latest"
With this configuration:
  • Requests for gemini-2.5-pro → routed to Claude Sonnet
  • Requests for gemini-2.5-flash → routed to GPT-4o
  • Automatic load balancing and fallbacks across providers
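To confirm an alias resolves, request the aliased name directly through the OpenAI-compatible endpoint (a sketch reusing the test call from Troubleshooting below; the routed model is visible in your Adaptive dashboard logs):
curl -s -X POST https://www.llmadaptive.uk/api/v1/chat/completions \
  -H "Authorization: Bearer $GEMINI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gemini-2.5-pro",
    "messages": [{"role": "user", "content": "Hello"}]
  }'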

Load Balancing Configuration

Configure load balancing across multiple models for higher throughput:
proxy_config.yaml
model_list:
  - model_name: claude-sonnet
    litellm_params:
      model: anthropic/claude-3-5-sonnet-20241022
      api_key: os.environ/ANTHROPIC_API_KEY

  - model_name: gpt-4o
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY

  - model_name: gemini-pro
    litellm_params:
      model: gemini/gemini-2.5-pro
      api_key: os.environ/GOOGLE_API_KEY

router_settings:
  model_group_alias:
    "gemini-2.5-pro": ["claude-sonnet", "gpt-4o", "gemini-pro"]
Benefits:
  • Higher rate limits: Distribute requests across multiple providers
  • Automatic failover: If one provider is down, requests route to others
  • Cost optimization: Route to the most cost-effective available model

Usage Examples

# Start interactive chat session
gemini

# Or with a specific prompt
gemini "help me debug this code"

Troubleshooting

Problem: Gemini CLI installation fails
Solutions:
  • Ensure Node.js 18+ is installed: node --version
  • Install Node.js if needed:
    # Using nvm (recommended)
    curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.3/install.sh | bash
    nvm install 22
    
  • Check npm permissions: npm config get prefix
  • Try with sudo (not recommended): sudo npm install -g @google/gemini-cli
Problem: “Unauthorized” or “Invalid API key” errors
Solutions:
  1. Verify your API key at llmadaptive.uk/dashboard
  2. Check environment variables are set:
    echo $GEMINI_API_KEY
    echo $GOOGLE_GEMINI_BASE_URL
    
  3. Ensure variables are exported in your shell config:
    # Bash/Zsh
    source ~/.bashrc  # or ~/.zshrc
    
    # Fish
    source ~/.config/fish/config.fish
    
  4. Restart your terminal if changes were made to shell config
  5. Verify the base URL is correct: https://www.llmadaptive.uk/api/v1beta
Problem: Cannot connect to Adaptive API
Solutions:
  • Check internet connectivity
  • Verify base URL is correct: echo $GOOGLE_GEMINI_BASE_URL
  • Test API directly:
    curl -X POST https://www.llmadaptive.uk/api/v1/chat/completions \
      -H "Authorization: Bearer $GEMINI_API_KEY" \
      -H "Content-Type: application/json" \
      -d '{
        "model": "gemini-2.5-pro",
        "messages": [{"role": "user", "content": "Hello"}]
      }'
    
  • Check if your network/firewall blocks the API endpoint
Problem: Requests not routing to expected models
Solutions:
  1. Check if model alias is configured (if using advanced routing)
  2. Verify your Adaptive proxy configuration
  3. Review model names in your requests
  4. Check Adaptive dashboard for routing logs
  5. Clear the ADAPTIVE_MODEL environment variable for intelligent routing:
    unset ADAPTIVE_MODEL
    
Problem: Slow response times or timeouts
Solutions:
  • Check Adaptive dashboard for provider status
  • Verify rate limits aren’t exceeded
  • Consider using load balancing across multiple providers
  • Check your internet connection speed
  • Review model selection—some models are faster than others

Uninstallation

If you need to remove Gemini CLI or revert to Google’s API:
Step 1: Remove Gemini CLI

npm uninstall -g @google/gemini-cli
Step 2: Remove Environment Variables

Edit your shell config file and remove these lines:
# Gemini CLI with Adaptive LLM API Configuration
export GEMINI_API_KEY="..."
export GOOGLE_GEMINI_BASE_URL="..."
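If you prefer a non-interactive cleanup, GNU sed can delete the marker comment plus the two export lines in one pass (a sketch; back up your config first, and adjust the path for Zsh or Fish):
# GNU sed: remove the comment line and the following two lines (writes a .bak backup)
sed -i.bak '/# Gemini CLI with Adaptive LLM API Configuration/,+2d' ~/.bashrc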
Step 3: Reload Shell Configuration

source ~/.bashrc  # or ~/.zshrc or ~/.config/fish/config.fish

Contact us at info@llmadaptive.uk for feedback or assistance with your Gemini CLI integration.