Introduction: The Power of Adaptability
Welcome back, future AI architect! In the previous chapters, we got our hands dirty with setting up any-llm and running our first basic LLM calls. We saw how this clever library abstracts away much of the complexity of interacting with large language models. But what if you need to use different LLM providers—say, OpenAI for creative tasks and Mistral for concise summaries—within the same application, or even switch between them dynamically based on user preference or cost?
This is where any-llm truly shines! This chapter is all about unlocking the library’s superpower: dynamic provider switching and advanced configuration. Understanding this is crucial for building robust, cost-effective, and future-proof AI applications. Imagine being able to swap out an expensive cloud provider for a local model when development costs are a concern, or seamlessly failover to a different API if one goes down. That’s the flexibility we’re aiming for!
By the end of this chapter, you’ll be able to:
- Configure `any-llm` to recognize multiple LLM providers.
- Dynamically select which LLM provider to use at runtime.
- Pass provider-specific parameters for fine-grained control.
- Understand the best practices for managing API keys and settings.
Let’s dive into making our AI applications truly adaptable!
Core Concepts: The Unified Interface and Beyond
At its heart, any-llm provides a brilliant abstraction layer. Instead of writing provider-specific code for OpenAI, then rewriting it for Anthropic, and again for Ollama, you write your logic once, and any-llm translates it for the chosen backend. This not only simplifies your code but also gives you incredible flexibility.
Environment Variables: Your First Line of Defense
The most common and secure way to configure your LLM providers, especially for API keys, is through environment variables. any-llm intelligently detects these variables to know which providers are available to you.
For instance, to use OpenAI, you’d typically set:
```bash
export OPENAI_API_KEY="sk-YOUR_OPENAI_KEY"
```
For Mistral AI:
```bash
export MISTRAL_API_KEY="YOUR_MISTRAL_KEY"
```
And for local models via Ollama (assuming the Ollama server is running), you typically don't need an API key at all; a running server is what signals availability. For the cloud providers, `any-llm` looks for these common keys to infer which providers you intend to use.
Why environment variables?
- Security: Keeps sensitive API keys out of your codebase.
- Flexibility: Easily switch keys or even entire providers without changing code.
- Deployment: Standard practice for CI/CD pipelines and production environments.
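To make the detection idea concrete, here is a minimal sketch of how you might check which providers have keys set before making any calls. The helper and its mapping are my own illustration, not part of `any-llm`'s API; the chapter's provider names are assumed.

```python
import os

# Hypothetical helper (not part of any-llm's API): map the provider names
# this chapter uses to the environment variables that hold their API keys.
PROVIDER_ENV_KEYS = {
    "openai": "OPENAI_API_KEY",
    "mistral": "MISTRAL_API_KEY",
}

def available_providers(env=None):
    """Return the provider names whose API keys are set."""
    env = os.environ if env is None else env
    return [name for name, key in PROVIDER_ENV_KEYS.items() if env.get(key)]
```

A check like `if "mistral" in available_providers(): ...` lets you skip examples gracefully when a key is missing, which is exactly what we'll do later in this chapter.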
Programmatic Provider Selection: Taking Control
While environment variables make providers available, any-llm allows you to explicitly choose which one to use within your code. This is done primarily through the provider argument in the completion function.
Let’s visualize how any-llm acts as a central switchboard:
Figure 4.1: any-llm as a unified LLM API gateway
As you can see, your code talks to any-llm, and any-llm routes your request to the appropriate LLM provider based on your instruction. Pretty neat, right?
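The switchboard idea can be sketched as a simple name-to-backend lookup. This toy model is only an illustration of the routing concept; `any-llm`'s real internals are far more sophisticated, and the two `call_*` functions below are stand-ins, not real clients.

```python
# Toy model of the switchboard idea: a provider name selects a backend.
def call_openai(prompt: str) -> str:
    return f"[openai] {prompt}"

def call_mistral(prompt: str) -> str:
    return f"[mistral] {prompt}"

BACKENDS = {"openai": call_openai, "mistral": call_mistral}

def route(provider: str, prompt: str) -> str:
    """Dispatch a prompt to the backend registered under `provider`."""
    if provider not in BACKENDS:
        raise ValueError(f"Unknown provider '{provider}'")
    return BACKENDS[provider](prompt)
```

Unknown names fail fast with a `ValueError`, which mirrors the kind of error you'll see in the troubleshooting section when a provider string is misspelled.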
Provider-Specific Parameters: Fine-Tuning Your Requests
Even though any-llm provides a unified interface, LLM providers often have unique parameters or different naming conventions for common ones (e.g., model names, temperature ranges, top_p values). any-llm gracefully handles this by allowing you to pass these parameters directly. It will intelligently map them or apply them if the underlying provider supports them.
For parameters common across most LLMs (like temperature, max_tokens, seed), any-llm provides a consistent interface. For highly specific parameters, you can often pass them as additional keyword arguments, and any-llm will forward them if the selected provider understands them.
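One common pattern for this is to keep shared parameters in one place and merge in provider-specific extras only when that provider is selected. The sketch below assumes nothing about `any-llm` itself; the extra parameters shown are purely illustrative examples.

```python
# Sketch: merge shared parameters with provider-specific extras before a
# completion call. The extras below are illustrative examples only, not
# parameters any particular provider is guaranteed to accept.
COMMON_PARAMS = {"max_tokens": 50, "temperature": 0.7}

PROVIDER_EXTRAS = {
    "openai": {"seed": 42},        # illustrative only
    "ollama": {"num_ctx": 4096},   # illustrative only
}

def build_params(provider: str) -> dict:
    """Combine common parameters with any provider-specific ones."""
    return {**COMMON_PARAMS, **PROVIDER_EXTRAS.get(provider, {})}
```

You could then call `completion(provider=provider, **build_params(provider))`, so a provider-only parameter is never sent to a backend that doesn't understand it.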
Step-by-Step Implementation: Switching on the Fly
Let’s put these concepts into practice. We’ll demonstrate how to switch between OpenAI and Mistral AI.
Prerequisites:
Before we begin, ensure you have API keys for both OpenAI and Mistral AI (or another combination like OpenAI and Ollama) and that any-llm-sdk is installed with the necessary provider integrations.
```bash
# Verify installation for OpenAI and Mistral (or other providers you intend to use)
# As of 2025-12-30, any-llm-sdk is the main package.
pip install 'any-llm-sdk[openai,mistral]'
```
Then, set your API keys as environment variables:
```bash
# Replace with your actual keys!
export OPENAI_API_KEY="sk-YOUR_OPENAI_KEY_HERE"
export MISTRAL_API_KEY="YOUR_MISTRAL_KEY_HERE"
# If using Ollama locally, ensure it's running and you've pulled a model, e.g., 'ollama run llama3'
```
Step 1: Basic Completion (Implicit Provider)
First, let’s make a simple call. If you have multiple API keys set, any-llm might pick a default based on internal heuristics or the order of detection.
Create a file named dynamic_switch.py:
```python
# dynamic_switch.py
import os
from any_llm import completion

# Ensure API keys are set in your environment
# For example:
# export OPENAI_API_KEY="sk-..."
# export MISTRAL_API_KEY="..."

# --- Part 1: Default/Inferred Provider ---
print("--- Using inferred provider ---")
try:
    response_default = completion(
        prompt="Tell me a very short, cheerful story about a robot.",
        max_tokens=50,
        temperature=0.7
    )
    print(f"Default Provider Response: {response_default.text}\n")
    print(f"Provider used: {response_default.provider}\n")  # any-llm v1.0 and later provides provider info
except Exception as e:
    print(f"Error with default provider: {e}\n")
```
Explanation:
- `import os`: We'll use this to check environment variables later, though `any_llm` handles the detection itself.
- `from any_llm import completion`: Imports the core function.
- The `completion` call is straightforward. `any-llm` will try to find an available provider based on your environment variables. The `response.provider` attribute (available in `any-llm` v1.0+) is helpful for confirming which provider was actually used.
Run this script:
```bash
python dynamic_switch.py
```
Observe which provider any-llm chose by default.
Step 2: Explicitly Switching to OpenAI
Now, let’s explicitly tell any-llm to use OpenAI.
Add the following code to dynamic_switch.py:
```python
# dynamic_switch.py (continued)

# --- Part 2: Explicitly using OpenAI ---
if os.getenv("OPENAI_API_KEY"):
    print("--- Explicitly using OpenAI ---")
    try:
        response_openai = completion(
            prompt="Tell me a very short, cheerful story about a robot, in the style of a classic sci-fi novel.",
            provider='openai',
            model='gpt-4o-mini',  # Using a specific OpenAI model
            max_tokens=70,
            temperature=0.8
        )
        print(f"OpenAI Response: {response_openai.text}\n")
        print(f"Provider used: {response_openai.provider}\n")
    except Exception as e:
        print(f"Error with OpenAI provider: {e}\n")
else:
    print("OPENAI_API_KEY not set. Skipping OpenAI example.\n")
```
Explanation:
- `if os.getenv("OPENAI_API_KEY"):` — a good practice to ensure the necessary API key is present before attempting to use the provider.
- `provider='openai'`: This is the magic! We're explicitly telling `any-llm` to route this request to OpenAI.
- `model='gpt-4o-mini'`: We're also specifying a particular model available from OpenAI. `any-llm` will pass this through.
- `temperature=0.8`: A common parameter, passed directly.
Run the script again. You should now see output clearly indicating OpenAI as the provider.
Step 3: Explicitly Switching to Mistral AI (or another)
Let’s do the same for Mistral AI.
Add this to dynamic_switch.py:
```python
# dynamic_switch.py (continued)

# --- Part 3: Explicitly using Mistral AI ---
if os.getenv("MISTRAL_API_KEY"):
    print("--- Explicitly using Mistral AI ---")
    try:
        response_mistral = completion(
            prompt="Tell me a very short, cheerful story about a robot, focusing on its efficiency and kindness.",
            provider='mistral',
            model='mistral-large-latest',  # Using a specific Mistral model
            max_tokens=60,
            temperature=0.6
        )
        print(f"Mistral AI Response: {response_mistral.text}\n")
        print(f"Provider used: {response_mistral.provider}\n")
    except Exception as e:
        print(f"Error with Mistral AI provider: {e}\n")
else:
    print("MISTRAL_API_KEY not set. Skipping Mistral AI example.\n")
```
Explanation:
- Similar to the OpenAI example, we check for the `MISTRAL_API_KEY`.
- `provider='mistral'`: Directs the request to Mistral AI.
- `model='mistral-large-latest'`: Specifies a Mistral-specific model.
- Notice how `temperature` is used consistently, even though the underlying APIs might interpret it slightly differently or have different default ranges. `any-llm` normalizes this for you where possible.
Run your dynamic_switch.py one last time. You’ve now successfully performed dynamic provider switching!
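The intro to this chapter also promised failover between providers, and the pieces for it are now in place. Below is a minimal sketch of that pattern: try each provider in order and return the first success. The real completion function is injected as `call` (a design choice that makes the logic testable without network access); in practice you would pass `any_llm.completion`, and the keyword arguments follow this chapter's usage.

```python
# Failover sketch: try providers in order until one succeeds. The actual
# completion function is injected as `call`, so this logic can be tested
# without network access; in practice you would pass any_llm.completion.
def complete_with_failover(prompt, providers, call):
    errors = {}
    for provider in providers:
        try:
            return call(prompt=prompt, provider=provider)
        except Exception as exc:  # real code should catch narrower exceptions
            errors[provider] = exc
    raise RuntimeError(f"All providers failed: {errors}")
```

For example, `complete_with_failover("Hello!", ["openai", "mistral"], completion)` would fall back to Mistral AI if the OpenAI call raised an error.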
Mini-Challenge: The Flexible Greeter
Now it’s your turn to apply what you’ve learned!
Challenge:
Create a Python function called flexible_greeter(name, provider_choice) that takes a person’s name and a provider_choice string (e.g., “openai”, “mistral”, “ollama”) as input. This function should:
- Construct a prompt to generate a personalized greeting for the given `name`.
- Use `any_llm.completion` to get a greeting, dynamically selecting the LLM provider based on `provider_choice`.
- For OpenAI, set `temperature=0.9`. For Mistral, set `temperature=0.5`. For any other provider, use a default `temperature=0.7`.
- Print the greeting and the name of the provider that generated it.
- Handle potential errors if a provider isn't available or the `provider_choice` is invalid.
Hint:
- Use an `if/elif/else` structure to set the `temperature` and `provider` arguments based on `provider_choice`.
- Remember to wrap your `completion` call in a `try...except` block.
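If you get stuck, here is a partial scaffold for just the parameter-selection piece of the challenge. It deliberately stops short of the `completion` call so the interesting wiring is still yours to do; the temperatures mirror the challenge requirements above.

```python
# Partial scaffold for the challenge: choose the temperature per provider.
# Wiring up the actual completion() call is left to you.
def pick_temperature(provider_choice: str) -> float:
    if provider_choice == "openai":
        return 0.9
    elif provider_choice == "mistral":
        return 0.5
    else:
        return 0.7
```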
What to Observe/Learn: This challenge will solidify your understanding of programmatic provider selection, conditional parameter passing, and basic error handling, preparing you for more complex, adaptable AI applications.
Common Pitfalls & Troubleshooting
Even with any-llm simplifying things, dynamic switching can introduce a few common hiccups:
Missing or Incorrect API Keys:
- Symptom: `any_llm.exceptions.ProviderInitializationError` or similar authentication errors.
- Fix: Double-check your environment variables. Ensure they are correctly named (e.g., `OPENAI_API_KEY`, not `OPENAI_KEY`) and that the values are correct and active. Restart your terminal or IDE if you've just set them, as environment variables are often loaded at session start.
Invalid `provider` Name:
- Symptom: `ValueError: Unknown provider 'my_openai'` or similar.
- Fix: Ensure the `provider` string you pass (e.g., `'openai'`, `'mistral'`, `'ollama'`) exactly matches the names `any-llm` expects. Refer to the `any-llm` documentation for the canonical provider names.
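A cheap way to catch this class of mistake early is to validate the provider name yourself before making the call. The set of names below is illustrative only; the `any-llm` documentation is the source of truth for the canonical identifiers.

```python
# Sketch: validate a provider name against a known set before calling
# completion(), so typos fail fast with a clear message. The set below is
# illustrative; check the any-llm docs for the canonical provider names.
KNOWN_PROVIDERS = {"openai", "mistral", "ollama"}

def validate_provider(name: str) -> str:
    normalized = name.strip().lower()
    if normalized not in KNOWN_PROVIDERS:
        raise ValueError(
            f"Unknown provider '{name}'. Expected one of: {sorted(KNOWN_PROVIDERS)}"
        )
    return normalized
```

Normalizing case and whitespace up front also means user-supplied strings like `" OpenAI "` resolve cleanly instead of failing deep inside the library.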
Provider-Specific Parameter Mismatches:
- Symptom: An error from the underlying LLM API indicating an unknown parameter, or unexpected behavior. For example, trying to use `n_gpu_layers` with OpenAI.
- Fix: Be mindful that while `any-llm` unifies common parameters, some are truly unique to a specific LLM or provider. If you're passing a parameter that's not universally supported, ensure it's only passed when that specific provider is active. `any-llm` will try its best to ignore unsupported parameters, but sometimes the underlying API will complain.
Summary
Congratulations! You’ve mastered dynamic provider switching and advanced configuration with any-llm. Here are the key takeaways from this chapter:
- Unified Interface: `any-llm` provides a single `completion` function to interact with diverse LLM providers.
- Environment Variables: Securely store and manage API keys (e.g., `OPENAI_API_KEY`, `MISTRAL_API_KEY`) for `any-llm` to detect available providers.
- Programmatic Control: Use the `provider` argument in `completion()` to explicitly select an LLM backend at runtime, enabling incredible flexibility.
- Parameter Handling: Pass common and provider-specific parameters directly to `completion()`; `any-llm` handles the mapping and forwarding.
- Best Practices: Always check for environment variables and use `try...except` blocks for robust error handling, especially when dealing with external APIs.
This newfound ability to seamlessly switch between LLMs empowers you to build more resilient, cost-optimized, and adaptable AI applications. In the next chapter, we’ll dive deeper into Error Handling and Exceptions, ensuring your any-llm applications are not just flexible, but also incredibly robust!
References
- Mozilla AI: any-llm GitHub Repository
- Mozilla AI Blog: Run Any LLM from One API: Introducing any-llm 1.0
- any-llm Official Documentation (GitHub Pages)
- OpenAI API Documentation
- Mistral AI API Documentation
This page is AI-assisted and reviewed. It references official documentation and recognized resources where relevant.