Welcome to the World of any-llm!

Hello, future AI architect! Are you ready to streamline your interactions with large language models (LLMs) and free yourself from provider-specific complexities? You’ve come to the right place! In this chapter, we’re going to embark on an exciting journey with any-llm, a powerful Python library developed by Mozilla.ai. It’s designed to give you a single, unified interface to communicate with a multitude of LLM providers, whether they’re running in the cloud or locally on your machine.

By the end of this chapter, you’ll not only understand the core philosophy behind any-llm but also have it successfully installed and make your very first API call. We’ll break down every step into bite-sized pieces, ensuring you build a solid foundation without feeling overwhelmed. Think of any-llm as your universal remote control for all LLMs!

This chapter is designed for absolute beginners to any-llm, assuming you have a basic understanding of Python programming and command-line interfaces. No prior experience with LLM APIs is required – we’ll learn it all together!

Understanding the Core Concept: Why any-llm?

Before we start coding, let’s grasp why any-llm exists and what problem it solves. Imagine you’re building an application that needs to leverage the power of LLMs. Initially, you might choose OpenAI’s GPT models. You write your code, integrate with their API, and everything works great.

But what if you later decide to experiment with Anthropic’s Claude, or perhaps a local model like Mistral running via Ollama, to compare performance, cost, or specific capabilities? Without any-llm, you’d have to rewrite significant portions of your code. Each provider has its own SDK, its own request formats, and its own response structures. This leads to:

  • Vendor Lock-in: Your code becomes tightly coupled to a specific provider.
  • Increased Complexity: Managing multiple SDKs and API variations.
  • Slower Iteration: Switching providers means substantial code changes.

This is where any-llm shines! It acts as an abstraction layer, providing a consistent API that translates your requests into the specific format required by the underlying LLM provider. You write your code once, using the any-llm interface, and then simply change a configuration parameter to switch between providers. How cool is that?

The any-llm Abstraction Layer

Let’s visualize this concept. Your application talks to any-llm, and any-llm handles the complex conversations with various LLM services.

    graph TD
        A["Your Application Code"] -->|"Unified API Call"| B{"any-llm SDK"}
        B -->|"Translates Request"| C["OpenAI API"]
        B -->|"Translates Request"| D["Anthropic API"]
        B -->|"Translates Request"| E["Mistral AI API"]
        B -->|"Translates Request"| F["Ollama (for Local Models)"]
        C -->|"Provider Response"| B
        D -->|"Provider Response"| B
        E -->|"Provider Response"| B
        F -->|"Provider Response"| B
        B -->|"Unified Response"| A

In this diagram, your application always interacts with any-llm (node B). any-llm then decides which LLM provider (C, D, E, F) to send the request to, handles the provider-specific communication, and returns a standardized response back to your application. This unified approach simplifies development and increases flexibility immensely.
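To make the idea concrete, here is a toy sketch of such an abstraction layer in plain Python. This is NOT any-llm's actual implementation — the adapter payloads below are made up — but it shows the core trick: one unified call signature, with per-provider "translators" selected by a configuration value.

```python
# Toy illustration of a provider abstraction layer (not any-llm's real code).
# Each adapter translates a unified request into a (hypothetical) provider-specific payload.

def mistral_adapter(messages):
    return {"endpoint": "mistral", "inputs": messages}

def openai_adapter(messages):
    return {"endpoint": "openai", "messages": messages}

ADAPTERS = {"mistral": mistral_adapter, "openai": openai_adapter}

def unified_completion(provider, messages):
    """One call signature; the adapter handles provider differences."""
    adapter = ADAPTERS.get(provider)
    if adapter is None:
        raise ValueError(f"Unknown provider: {provider!r}")
    return adapter(messages)

# Switching providers changes one argument, not the surrounding code:
payload = unified_completion("mistral", [{"role": "user", "content": "Hi"}])
```

Notice that the calling code never changes shape; only the `provider` string does. That is exactly the property any-llm gives you at full scale.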

Step-by-Step Implementation: Getting Started

Now that we understand the “why,” let’s get our hands dirty with the “how”! We’ll start by setting up your environment and making your very first any-llm call.

Step 1: Prepare Your Python Environment

First things first, you’ll need a healthy Python environment. any-llm is a Python library, and we recommend using a modern Python version and a virtual environment to keep your project dependencies organized.

  1. Check Python Version: Open your terminal or command prompt and type:

    python3 --version
    

    You should see a version number like Python 3.10.x or higher. A recent Python release (3.10 or newer) is generally recommended for the best compatibility; check the any-llm documentation for the exact versions it supports. If you need to install Python, visit the official Python website.

  2. Create a Virtual Environment: A virtual environment isolates your project’s dependencies from other Python projects. This prevents conflicts and keeps things tidy. Navigate to your desired project directory (or create a new one):

    mkdir any-llm-project
    cd any-llm-project
    

    Now, create the virtual environment. We’ll call it .venv (a common convention):

    python3 -m venv .venv
    
  3. Activate Your Virtual Environment: This step is crucial. Once activated, any Python packages you install will only go into this environment.

    • On macOS/Linux:
      source .venv/bin/activate
      
    • On Windows (Command Prompt):
      .venv\Scripts\activate.bat
      
    • On Windows (PowerShell):
      .\.venv\Scripts\Activate.ps1
      

    You should see (.venv) or similar at the beginning of your terminal prompt, indicating the environment is active.
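If you ever lose track of whether a virtual environment is active, Python itself can tell you: inside an active venv, `sys.prefix` points into the environment while `sys.base_prefix` still points at the system installation. A quick check:

```python
import sys

def in_virtualenv() -> bool:
    # In an active venv, sys.prefix differs from sys.base_prefix
    return sys.prefix != sys.base_prefix

print("venv active" if in_virtualenv() else "venv NOT active")
```

You can run this as `python -c "import sys; print(sys.prefix != sys.base_prefix)"` for a one-line sanity check.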

Step 2: Install any-llm-sdk

With your virtual environment ready, it’s time to install any-llm. The package is available on PyPI as any-llm-sdk. When installing, you can also specify “extras” – additional dependencies for specific LLM providers. For our first example, let’s install support for Mistral (a popular cloud provider) and Ollama (for local models, which we’ll explore later).

As of December 2025, any-llm-sdk is stable at v1.0.

pip install 'any-llm-sdk[mistral,ollama]==1.0.0'

Wait, what did that command do?

  • pip install: The standard Python package installer.
  • 'any-llm-sdk[mistral,ollama]': This installs the core any-llm-sdk library AND the necessary dependencies to communicate with Mistral AI’s API and Ollama. You can add more providers here (e.g., openai, anthropic) if you plan to use them.
  • ==1.0.0: We explicitly pin the version to 1.0.0 for consistency, as it’s the latest production-ready stable release.

After the installation completes, you’ll see messages about successfully installed packages.

Step 3: Set Up Your API Key (Crucial for Cloud Providers!)

If you’re using a cloud LLM provider like Mistral, you’ll need an API key. Never hardcode API keys directly into your code! The best practice is to use environment variables.

  1. Obtain a Mistral AI API Key: If you don’t have one, head over to the Mistral AI website and sign up to get your API key.

  2. Set the Environment Variable: any-llm automatically looks for API keys in environment variables following a specific pattern (e.g., MISTRAL_API_KEY, OPENAI_API_KEY).

    • On macOS/Linux (for the current session):

      export MISTRAL_API_KEY="YOUR_MISTRAL_API_KEY_HERE"
      

      Replace "YOUR_MISTRAL_API_KEY_HERE" with your actual key. This setting lasts only for your current terminal session. To make it persistent, add the export line to your shell’s configuration file (e.g., ~/.bashrc, ~/.zshrc).

    • On Windows (Command Prompt - for the current session):

      set MISTRAL_API_KEY=YOUR_MISTRAL_API_KEY_HERE
      
    • On Windows (PowerShell - for the current session):

      $env:MISTRAL_API_KEY="YOUR_MISTRAL_API_KEY_HERE"
      

    Important: For production applications, consider using more robust secrets management solutions, but environment variables are excellent for local development.
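The "fail loudly and early" pattern for missing keys can be wrapped in a tiny helper. This helper is our own convenience function, not part of any-llm — but it is handy in any script that depends on an environment variable:

```python
import os

def require_api_key(var_name: str) -> str:
    """Return the key from the environment, or raise a clear error if it is missing."""
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(
            f"{var_name} is not set. Export it before running, e.g. "
            f'export {var_name}="..."'
        )
    return key

# Usage (assumes MISTRAL_API_KEY has been exported in your shell):
# api_key = require_api_key("MISTRAL_API_KEY")
```

A clear error at startup is far easier to debug than a cryptic 401 from the provider halfway through your program.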

Step 4: Your First any-llm API Call!

Now for the exciting part! Let’s write a small Python script to make our first call.

  1. Create a Python file: In your any-llm-project directory, create a new file named first_llm_call.py.

  2. Add the basic code: Open first_llm_call.py and add the following lines:

    import os
    from any_llm import completion
    
    # Step 1: Ensure your API key is set
    # any-llm automatically looks for MISTRAL_API_KEY
    if not os.environ.get("MISTRAL_API_KEY"):
        print("Error: MISTRAL_API_KEY environment variable not set.")
        print("Please set it before running this script.")
        exit(1)
    
    # Step 2: Choose your LLM provider and model
    # We're using Mistral here, but switching is as simple as changing 'provider'
    llm_client = completion.Completion(provider="mistral", model="mistral-tiny")
    
    # Step 3: Define your prompt
    user_prompt = "What is the capital of France?"
    
    print(f"Sending prompt to Mistral: '{user_prompt}'")
    
    # Step 4: Make the API call
    try:
        response = llm_client.create(
            messages=[
                {"role": "user", "content": user_prompt}
            ]
        )
    
        # Step 5: Process and print the response
        if response.choices:
            print("\nLLM Response:")
            print(response.choices[0].message.content)
        else:
            print("No choices returned in the response.")
    
    except Exception as e:
        print(f"An error occurred: {e}")
    

Let’s break down this code, line by line:

  • import os: We import the os module to access environment variables, specifically to check if MISTRAL_API_KEY is set. This is a good practice for robustness.
  • from any_llm import completion: This is the core any-llm import. completion is the module we’ll use for text generation tasks.
  • if not os.environ.get("MISTRAL_API_KEY"): ...: This block checks if the MISTRAL_API_KEY environment variable is present. If not, it prints an error and exits, preventing your script from failing later due to a missing key.
  • llm_client = completion.Completion(provider="mistral", model="mistral-tiny"): This is where we initialize our any-llm client.
    • provider="mistral": This tells any-llm to use the Mistral AI service. If you wanted to use OpenAI, you’d change this to provider="openai" (assuming you have openai extras installed and OPENAI_API_KEY set).
    • model="mistral-tiny": This specifies which specific model from Mistral we want to use. mistral-tiny is a good choice for quick, cost-effective initial tests.
  • user_prompt = "What is the capital of France?": Our simple question for the LLM.
  • print(f"Sending prompt to Mistral: '{user_prompt}'"): A helpful print statement to show what’s happening.
  • try...except Exception as e:: This is a basic error handling block. API calls can fail due to network issues, invalid keys, rate limits, etc. This ensures our script doesn’t crash unexpectedly.
  • response = llm_client.create(messages=[{"role": "user", "content": user_prompt}]): This is the actual API call!
    • llm_client.create(): The method to generate a completion.
    • messages=[{"role": "user", "content": user_prompt}]: This is the standard chat-like message format. The role specifies who is speaking (user, system, or assistant), and content is their message. any-llm standardizes this across providers.
  • if response.choices: print(response.choices[0].message.content): any-llm returns a response object. We access the first choice (LLMs can sometimes generate multiple alternatives) and then its message.content to get the generated text.
  3. Run Your Script: Save the first_llm_call.py file. Make sure your virtual environment is still active and your MISTRAL_API_KEY is set. Then, run:

    python first_llm_call.py
    

    You should see output similar to this:

    Sending prompt to Mistral: 'What is the capital of France?'
    
    LLM Response:
    The capital of France is Paris.
    

    Congratulations! You’ve successfully made your first LLM call using any-llm!
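One more thing worth internalizing: the `messages` list you just sent is plain Python data, and any-llm standardizes this shape across every provider. A multi-turn conversation is simply a longer list, built up as the exchange proceeds:

```python
# Building a multi-turn conversation in the standard chat format.
# This is ordinary Python data; only the final API call needs any-llm.
conversation = [
    {"role": "system", "content": "You are a concise geography tutor."},
    {"role": "user", "content": "What is the capital of France?"},
]

# After receiving a reply, append it and ask a follow-up:
conversation.append({"role": "assistant", "content": "The capital of France is Paris."})
conversation.append({"role": "user", "content": "And of Spain?"})

for msg in conversation:
    print(f"{msg['role']:>9}: {msg['content']}")
```

Passing the whole list on each call is how chat models "remember" earlier turns — the history travels with every request.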

Mini-Challenge: Explore a Different Model

You’ve just seen how easy it is to use any-llm. Now, let’s put your understanding to the test.

Challenge: Modify your first_llm_call.py script to ask a different question and use a different Mistral model.

  • Change user_prompt to something like “Tell me a short, funny fact about cats.”
  • Change model="mistral-tiny" to model="mistral-small". This model is generally more capable than mistral-tiny.

Hint: You only need to change two lines in your existing script!

What to observe/learn: Notice how changing the model is simply a matter of updating the model parameter. The rest of your code (llm_client.create, message format, response parsing) remains identical, showcasing the power of any-llm’s unified interface. You might also observe subtle differences in response quality or speed with the different model.

Common Pitfalls & Troubleshooting

Even with careful steps, you might encounter issues. Here are a few common ones and how to troubleshoot them:

  1. Error: MISTRAL_API_KEY environment variable not set.

    • Cause: You forgot to set the environment variable, or you set it in a different terminal session that’s no longer active.
    • Solution: Re-run the export MISTRAL_API_KEY="..." (or set/$env:) command in your current terminal session before running the Python script. Double-check that there are no typos in the key or the variable name.
  2. ModuleNotFoundError: No module named 'any_llm' or No module named 'mistralai'

    • Cause: Your virtual environment is not active, or any-llm-sdk (or its extras) wasn’t installed correctly.
    • Solution:
      • Ensure your virtual environment is active (you should see (.venv) in your prompt). If not, activate it (source .venv/bin/activate).
      • Re-run the installation command: pip install 'any-llm-sdk[mistral,ollama]==1.0.0'.
  3. An error occurred: HTTPError: 401 Client Error: Unauthorized for url: ...

    • Cause: Your API key is incorrect or invalid.
    • Solution: Double-check your MISTRAL_API_KEY for typos. Generate a new key from the Mistral AI dashboard if you suspect the current one is revoked or expired.
  4. Slow or No Response:

    • Cause: Network issues, temporary service outages with the LLM provider, or hitting rate limits.
    • Solution: Check your internet connection. Visit the Mistral AI status page (if available) to see if there are ongoing issues. For rate limits, you might need to wait or implement retry logic (we’ll cover this in a later chapter!).
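As a small preview of the retry logic mentioned above, here is a generic exponential-backoff wrapper in plain Python (no any-llm required). It is a sketch, not a production pattern — real code would catch only transient errors — but any flaky call, including an LLM request, can be wrapped this way:

```python
import time

def with_retries(func, max_attempts=3, base_delay=1.0):
    """Call func(), retrying with exponential backoff on failure."""
    for attempt in range(1, max_attempts + 1):
        try:
            return func()
        except Exception:
            if attempt == max_attempts:
                raise  # out of attempts: propagate the last error
            time.sleep(base_delay * 2 ** (attempt - 1))

# Example: a simulated call that fails twice, then succeeds.
attempts = {"count": 0}

def flaky():
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise ConnectionError("simulated outage")
    return "ok"

print(with_retries(flaky, max_attempts=3, base_delay=0.01))  # prints "ok" on the third attempt
```

In a real application you would wrap the LLM call itself, e.g. `with_retries(lambda: llm_client.create(messages=...))`, and catch only the provider's transient error types.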

Summary: Your First Steps with any-llm

You’ve just completed a significant first step in mastering any-llm! Let’s recap what we’ve covered:

  • What is any-llm? A unified Python interface for interacting with various LLM providers, simplifying development and provider switching.
  • Why use it? Reduces vendor lock-in, streamlines code, and makes experimenting with different models effortless.
  • Installation: We learned to set up a Python virtual environment and installed any-llm-sdk with specific provider extras (pip install 'any-llm-sdk[mistral,ollama]==1.0.0').
  • API Keys: The importance of using environment variables (e.g., MISTRAL_API_KEY) for secure credential management.
  • First API Call: You successfully wrote and executed a Python script to send a prompt to Mistral AI using any-llm’s completion interface.
  • Unified Interface: We saw how the completion.Completion() object and create() method provide a consistent way to interact, regardless of the underlying provider.

In the next chapter, we’ll dive deeper into any-llm’s core API concepts, exploring different parameters, handling more complex prompts, and understanding the structure of the responses. Get ready to unlock even more power!
