Welcome back, intrepid testers! In our previous chapters, you mastered the art of spinning up individual containers for your integration tests. You learned how to get a database running, connect to it, and ensure your application logic works against a real dependency. That’s a huge leap from relying on fragile mocks!

But what happens when your application isn’t just talking to one database? What if it’s a microservice interacting with another microservice, a message broker, and a database? In the real world, applications often live in a complex ecosystem of services, all needing to communicate with each other. Testing such interconnected systems requires more than just isolated containers.

This chapter is your deep dive into the fascinating world of advanced networking and container linking within Testcontainers. We’ll explore how to orchestrate multiple containers, enable them to talk to each other, and replicate realistic multi-service environments for your tests. By the end, you’ll be able to confidently test intricate application stacks, building robust integration tests that truly mirror your production setup. Get ready to level up your Testcontainers game!

Core Concepts: The Dance of Connected Containers

Before we jump into code, let’s understand why and how containers communicate, both natively in Docker and through the Testcontainers library.

The Need for Inter-Container Communication

Imagine a simple scenario:

  1. Your UserService (a microservice) needs to store user data in a PostgreSQL database.
  2. Your NotificationService needs to publish events to an Apache Kafka message broker when a user registers.

To integration test these services effectively, you can’t just run the UserService and mock PostgreSQL. You need a real PostgreSQL container. And for NotificationService, you need a real Kafka container. Furthermore, if you’re testing an API gateway that routes requests to UserService and NotificationService, all three (API Gateway, UserService, NotificationService) might need to be running in containers and communicating.

This is where the magic of Docker networking comes in!

Docker Networking in a Nutshell

Docker containers, by default, are isolated. However, Docker provides powerful networking features to allow containers to communicate.

  1. Bridge Network (Default): When you run a container without specifying a network, it connects to the default bridge network. Containers on this network can talk to each other if you use their IP addresses, but this is cumbersome.

  2. User-Defined Networks: This is where the real power lies! When you create a user-defined network and attach multiple containers to it, Docker provides built-in DNS resolution. This means containers on the same user-defined network can resolve each other by their container names. This is the cornerstone of how Testcontainers facilitates inter-container communication.

    Why are user-defined networks better than the default bridge?

    • DNS Resolution: As mentioned, containers can find each other by name, making configurations simpler.
    • Isolation: User-defined networks provide better isolation from other containers on the Docker host.
    • Configurability: You can configure subnets, gateways, and more.
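You can see this DNS behavior with plain Docker before involving Testcontainers at all. A quick sketch (the names demo-net and web are illustrative; requires a running Docker daemon):

```shell
# Create a user-defined network and attach two containers to it.
docker network create demo-net

# Start a container named "web" on that network.
docker run -d --rm --name web --network demo-net nginx:alpine

# A second container on the same network can resolve "web" by name:
docker run --rm --network demo-net busybox ping -c 1 web

# Clean up.
docker rm -f web
docker network rm demo-net
```

The same ping by name fails on the default bridge network, which is exactly why Testcontainers builds on user-defined networks.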

Testcontainers and Network Management

Testcontainers cleverly leverages Docker’s user-defined networks to simplify complex test setups.

When you start a GenericContainer (or any specialized container like PostgreSQLContainer) without any network configuration, it joins Docker’s default bridge network, which does not give you name-based lookup between containers. To let containers talk to each other, you create a shared user-defined network in your test, attach each container to it, and they can then reach each other by network alias.

The key is that Testcontainers makes it easy to:

  • Create a Network: You can explicitly create a network object.
  • Attach Containers: You then tell each container to join this specific network.
  • Resolve by Name: Inside your application container, you’ll configure it to connect to other service containers (like a database or message broker) using their network aliases as hostnames. Testcontainers assigns each container a random unique name, so you give the containers you need to reach a stable alias yourself.

Let’s visualize this process:

flowchart TD
    A[Test Run Starts] --> B[Testcontainers Library]
    B --> C{Create Shared Test Network}
    C --> D[App Container]
    C --> E[Database Container]
    C --> F[Message Broker Container]
    D --> E
    D --> F
    F --> G[Test Assertions]
    G --> H[Cleanup Network & Containers]

Container Linking: A Modern Approach

You might encounter the term “container linking” or --link in older Docker documentation or tutorials. This was a legacy feature of the docker run command for connecting containers on the default bridge network. It’s now considered deprecated and less flexible than user-defined networks.

Testcontainers, by design, focuses on the modern approach: connecting containers to a shared user-defined network. This is more robust, scalable, and aligns with Docker’s recommended practices.

Docker Compose Integration for Complex Stacks

For truly complex microservice architectures, defining each GenericContainer individually in your test code can become verbose and hard to maintain. This is especially true if your development or production environment already uses docker-compose.yml to define your entire service stack.

Good news! Testcontainers has excellent support for Docker Compose. Java’s DockerComposeContainer and its counterparts (DockerComposeEnvironment in testcontainers-node, DockerCompose in testcontainers-python) allow you to:

  1. Point Testcontainers to your existing docker-compose.yml file.
  2. Testcontainers will then spin up all services defined in that file.
  3. It handles network setup, container naming, and port exposures automatically according to your Compose file.

This is a fantastic way to achieve high-fidelity integration tests, as your test environment can be an exact replica of your local development or even production environment, defined by the same docker-compose.yml.
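As a sketch of what such a stack might look like, here is an illustrative docker-compose.yml for the app-plus-database example used later in this chapter (service names, image tags, and credentials are example values):

```yaml
# docker-compose.yml (illustrative)
version: "3.8"
services:
  postgres-db:
    image: postgres:15-alpine
    environment:
      POSTGRES_DB: testdb
      POSTGRES_USER: testuser
      POSTGRES_PASSWORD: testpass
  myapp:
    build: ./myapp-docker
    depends_on:
      - postgres-db
    environment:
      DB_HOST: postgres-db   # Compose services resolve each other by service name
      DB_PORT: "5432"
```

In the Java library you would then point new DockerComposeContainer<>(new File("docker-compose.yml")) at this file and declare which services to expose and wait for with withExposedService(...).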

Step-by-Step Implementation: Building a Multi-Service Test

Let’s put these concepts into practice. We’ll build a simple scenario:

  • A lightweight HTTP application (our MyApp).
  • A PostgreSQL database that MyApp connects to.

Our MyApp will be a generic service that tries to establish a connection to a PostgreSQL instance. The test will verify that MyApp can successfully connect to the PostgreSQLContainer when both are on the same Testcontainers-managed network.

Prerequisites:

  • Docker installed and running.
  • Your favorite IDE.
  • For Java: Maven or Gradle. Testcontainers for Java (latest stable, e.g., 1.19.7 as of 2026-02-14).
  • For JavaScript/TypeScript: Node.js and npm/yarn. testcontainers plus module packages such as @testcontainers/postgresql (latest stable, e.g., 9.11.0 as of 2026-02-14).
  • For Python: Python 3.8+. pytest and testcontainers (the testcontainers-python project; latest stable, e.g., 4.14.1 as of 2026-02-14).

1. Java Example: Multi-Service Testing with Testcontainers

First, let’s create a placeholder “application” that attempts to connect to a database. For simplicity, this won’t be a full Spring Boot app, but a basic Java utility that we’ll containerize.

Project Setup (Maven): Create a new Maven project. Add the following dependencies to your pom.xml:

<!-- pom.xml -->
<dependencies>
    <dependency>
        <groupId>org.testcontainers</groupId>
        <artifactId>testcontainers</artifactId>
        <version>1.19.7</version> <!-- As of 2026-02-14, use latest stable -->
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.testcontainers</groupId>
        <artifactId>postgresql</artifactId>
        <version>1.19.7</version> <!-- As of 2026-02-14, use latest stable -->
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.junit.jupiter</groupId>
        <artifactId>junit-jupiter-api</artifactId>
        <version>5.10.1</version> <!-- Latest stable -->
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.junit.jupiter</groupId>
        <artifactId>junit-jupiter-engine</artifactId>
        <version>5.10.1</version> <!-- Latest stable -->
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.postgresql</groupId>
        <artifactId>postgresql</artifactId>
        <version>42.7.1</version> <!-- Latest stable -->
        <!-- Not test-scoped: the driver must end up inside the app JAR at runtime -->
    </dependency>
    <dependency>
        <groupId>org.assertj</groupId>
        <artifactId>assertj-core</artifactId>
        <version>3.25.3</version> <!-- Needed for the assertThat(...) calls in the test -->
        <scope>test</scope>
    </dependency>
</dependencies>

Step 1: Create a simple Dockerfile for our ‘App’ In your src/test/resources directory (or anywhere accessible by your test), create a directory myapp-docker and inside it, a Dockerfile.

# src/test/resources/myapp-docker/Dockerfile
FROM eclipse-temurin:17-jre-alpine
WORKDIR /app
COPY target/myapp-runner.jar .
EXPOSE 8080
CMD ["java", "-jar", "myapp-runner.jar"]

We’ll also need a dummy myapp-runner.jar. For our test, we’ll build a very simple Java application that just tries to connect to a PostgreSQL database on startup and exits. This will prove network connectivity.

Step 2: Create the Dummy Java Application (MyAppRunner.java) Create this in src/main/java/com/example/MyAppRunner.java.

// src/main/java/com/example/MyAppRunner.java
package com.example;

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.logging.Logger;

public class MyAppRunner {
    private static final Logger LOGGER = Logger.getLogger(MyAppRunner.class.getName());

    public static void main(String[] args) {
        String dbHost = System.getenv("DB_HOST");
        String dbPort = System.getenv("DB_PORT");
        String dbName = System.getenv("DB_NAME");
        String dbUser = System.getenv("DB_USER");
        String dbPass = System.getenv("DB_PASSWORD");

        if (dbHost == null || dbPort == null || dbName == null || dbUser == null || dbPass == null) {
            LOGGER.severe("Missing required environment variables for DB connection.");
            System.exit(1);
        }

        String jdbcUrl = String.format("jdbc:postgresql://%s:%s/%s", dbHost, dbPort, dbName);
        LOGGER.info("Attempting to connect to: " + jdbcUrl);

        try (Connection connection = DriverManager.getConnection(jdbcUrl, dbUser, dbPass)) {
            LOGGER.info("Successfully connected to PostgreSQL database!");
            // Optionally, run a simple query
            connection.createStatement().execute("SELECT 1");
            LOGGER.info("Successfully executed a test query.");
            System.exit(0); // Success!
        } catch (Exception e) {
            LOGGER.severe("Failed to connect to PostgreSQL: " + e.getMessage());
            e.printStackTrace();
            System.exit(1); // Failure
        }
    }
}

This tiny app expects database connection details via environment variables. It will try to connect and then exit. We’ll use this behavior to determine if the network linking was successful.

Step 3: Package the Dummy App (Maven) You’ll need to package MyAppRunner.java together with its runtime dependency (the PostgreSQL JDBC driver) into a single runnable JAR, because the Dockerfile copies only myapp-runner.jar into the image. The maven-shade-plugin builds such a fat JAR. Add the following to your pom.xml within the <build> section:

<!-- pom.xml, inside <build><plugins> -->
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <version>3.5.1</version> <!-- Latest stable -->
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
            <configuration>
                <finalName>myapp-runner</finalName>
                <transformers>
                    <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                        <mainClass>com.example.MyAppRunner</mainClass>
                    </transformer>
                </transformers>
            </configuration>
        </execution>
    </executions>
</plugin>

After saving MyAppRunner.java and updating pom.xml, run mvn clean package from your project root. This will create target/myapp-runner.jar.

Step 4: Write the Java Test (MultiServiceTest.java) Now, let’s create our JUnit 5 test.

// src/test/java/com/example/MultiServiceTest.java
package com.example;

import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.containers.Network;
import org.testcontainers.containers.PostgreSQLContainer;
import org.testcontainers.images.builder.ImageFromDockerfile;
import org.testcontainers.utility.DockerImageName;

import java.nio.file.Path;

import static org.assertj.core.api.Assertions.assertThat;

public class MultiServiceTest {

    // 1. Define a shared network
    private static Network network = Network.newNetwork();

    // 2. Define the PostgreSQL container
    private static PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>(DockerImageName.parse("postgres:15-alpine"))
            .withDatabaseName("testdb")
            .withUsername("testuser")
            .withPassword("testpass")
            .withNetwork(network) // Attach to our shared network
            .withNetworkAliases("postgres-db"); // Give it an alias for the app to use

    // 3. Define our custom application container
    private static GenericContainer<?> app;

    @BeforeAll
    static void setup() {
        // Build the app image from Dockerfile
        ImageFromDockerfile appImage = new ImageFromDockerfile()
                .withDockerfile(Path.of("src/test/resources/myapp-docker/Dockerfile"))
                .withFileFromPath("target/myapp-runner.jar", Path.of("target/myapp-runner.jar")); // Path to our built JAR

        app = new GenericContainer<>(appImage)
                .withNetwork(network) // Attach to the SAME shared network
                .dependsOn(postgres) // Ensure postgres starts first
                .withEnv("DB_HOST", "postgres-db") // Use the network alias as hostname!
                .withEnv("DB_PORT", "5432")
                .withEnv("DB_NAME", "testdb")
                .withEnv("DB_USER", "testuser")
                .withEnv("DB_PASSWORD", "testpass")
                // The app runs to completion, so we don't expose or wait on ports.
                // OneShotStartupCheckStrategy lets start() succeed for a container
                // that starts and then exits with code 0.
                .withStartupCheckStrategy(
                        new org.testcontainers.containers.startupcheck.OneShotStartupCheckStrategy())
                .withLogConsumer(outputFrame -> System.out.print(outputFrame.getUtf8String())); // Print app logs

        // Start both containers
        postgres.start();
        app.start();
    }

    @AfterAll
    static void cleanup() {
        app.stop();
        postgres.stop();
        network.close(); // Don't forget to close the network!
    }

    @Test
    void appShouldConnectToPostgres() throws Exception {
        // Our app exits with 0 on success, 1 on failure.
        // Block until the container has stopped, then inspect its exit code.
        // For a real, long-running app you might hit an API endpoint instead.
        int exitCode = app.getDockerClient()
                .waitContainerCmd(app.getContainerId())
                .start()
                .awaitStatusCode();
        assertThat(exitCode)
                .as("App container should exit with code 0 indicating successful DB connection")
                .isZero();
        assertThat(app.getLogs()).contains("Successfully connected to PostgreSQL database!");
    }
}

Explanation:

  1. Network.newNetwork(): We create an explicit Network object. This tells Testcontainers to provision a dedicated Docker network for our test.
  2. postgres.withNetwork(network): Both our PostgreSQLContainer and our GenericContainer for app are attached to this same network. This is crucial for them to communicate.
  3. postgres.withNetworkAliases("postgres-db"): We give the PostgreSQL container a network alias (postgres-db). This name will be resolvable by other containers on the same network.
  4. app.withEnv("DB_HOST", "postgres-db"): Inside our app container, we set the DB_HOST environment variable to postgres-db. Because app is on the same network as postgres, Docker’s DNS will resolve postgres-db to the correct IP address of the PostgreSQL container. No need for localhost or published ports!
  5. app.dependsOn(postgres): This ensures PostgreSQL starts and is ready before our application container attempts to start.
  6. ImageFromDockerfile: This dynamically builds our myapp-docker image at test runtime, ensuring the latest myapp-runner.jar is included.
  7. Exit-code check: our dummy app exits with 0 on success and 1 on failure, so the test asserts success by checking the app container’s exit status and logs.

2. JavaScript/TypeScript Example: Multi-Service Testing

Let’s do the same with Node.js and TypeScript.

Project Setup: Create a new Node.js project:

npm init -y
npm install --save-dev typescript ts-node @types/node jest ts-jest @types/jest testcontainers @testcontainers/postgresql

Note that the core Node.js package is named testcontainers; module packages such as @testcontainers/postgresql sit alongside it. Create tsconfig.json for TypeScript compilation:

// tsconfig.json
{
  "compilerOptions": {
    "target": "ES2020",
    "module": "CommonJS",
    "esModuleInterop": true,
    "forceConsistentCasingInFileNames": true,
    "strict": true,
    "skipLibCheck": true,
    "outDir": "./dist"
  },
  "include": ["src/**/*.ts", "test/**/*.ts"],
  "exclude": ["node_modules"]
}

Step 1: Create a simple Dockerfile for our ‘App’ Create myapp-docker/Dockerfile in your project root. (The test below actually copies the source files into a stock node:18-alpine image instead of building this Dockerfile, so this file is shown for reference; either approach works.)

# myapp-docker/Dockerfile
FROM node:18-alpine
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm install
COPY src/app.ts .
EXPOSE 3000
CMD ["npx", "ts-node", "app.ts"]

Step 2: Create the Dummy Node.js Application (app.ts) Create this in src/app.ts. This app will try to connect to PostgreSQL using environment variables.

// src/app.ts
import pg from 'pg'; // For simplicity, we'll use a direct PG client here
import { setTimeout } from 'timers/promises';

const DB_HOST = process.env.DB_HOST || 'localhost';
const DB_PORT = parseInt(process.env.DB_PORT || '5432', 10);
const DB_NAME = process.env.DB_NAME || 'testdb';
const DB_USER = process.env.DB_USER || 'testuser';
const DB_PASSWORD = process.env.DB_PASSWORD || 'testpass';

console.log(`[MyApp] Attempting to connect to PostgreSQL at ${DB_HOST}:${DB_PORT}/${DB_NAME}`);

const connectToDb = async () => {
  const client = new pg.Client({
    host: DB_HOST,
    port: DB_PORT,
    database: DB_NAME,
    user: DB_USER,
    password: DB_PASSWORD,
  });

  try {
    await client.connect();
    console.log('[MyApp] Successfully connected to PostgreSQL database!');
    const res = await client.query('SELECT 1 as test');
    console.log(`[MyApp] Successfully executed test query: ${JSON.stringify(res.rows)}`);
    await client.end();
    process.exit(0); // Indicate success
  } catch (error) {
    console.error('[MyApp] Failed to connect to PostgreSQL:', error);
    process.exit(1); // Indicate failure
  }
};

// Give it a little delay to ensure DB is fully up if dependsOn isn't perfect
setTimeout(2000).then(connectToDb);

Note: For pg to be available inside the container, we need to add it to package.json. Update package.json with a dependency: npm install pg (or yarn add pg)

Step 3: Write the TypeScript Test (multi-service.test.ts) Create this in test/multi-service.test.ts.

// test/multi-service.test.ts
import { GenericContainer, Network, StartedNetwork, StartedTestContainer, Wait } from 'testcontainers';
import { PostgreSqlContainer, StartedPostgreSqlContainer } from '@testcontainers/postgresql';
import { resolve } from 'path';

describe('Multi-Service Test with Testcontainers', () => {
    let network: StartedNetwork;
    let postgresContainer: StartedPostgreSqlContainer;
    let appContainer: StartedTestContainer;

    beforeAll(async () => {
        // 1. Create and start a shared network
        network = await new Network().start();

        // 2. Start PostgreSQL container on the shared network
        postgresContainer = await new PostgreSqlContainer('postgres:15-alpine')
            .withDatabase('testdb')
            .withUsername('testuser')
            .withPassword('testpass')
            .withNetwork(network) // Attach to our shared network
            .withNetworkAliases('postgres-db') // Give it an alias
            .start();

        // 3. Start our application container on the SAME network.
        // Instead of building an image, we copy the sources into a stock
        // node image and run them with ts-node.
        appContainer = await new GenericContainer('node:18-alpine')
            .withNetwork(network)
            .withCopyFilesToContainer([
                { source: resolve('./src/app.ts'), target: '/app/app.ts' },
                { source: resolve('./package.json'), target: '/app/package.json' },
                { source: resolve('./package-lock.json'), target: '/app/package-lock.json' },
            ])
            .withWorkingDir('/app')
            // Install deps, run the app, then idle so the container stays up.
            // If the connection fails, app.ts exits 1 and the && chain stops.
            .withCommand(['/bin/sh', '-c', 'npm install && npx ts-node app.ts && tail -f /dev/null'])
            .withEnvironment({
                DB_HOST: 'postgres-db', // Use the network alias!
                DB_PORT: '5432',
                DB_NAME: 'testdb',
                DB_USER: 'testuser',
                DB_PASSWORD: 'testpass',
            })
            .withLogConsumer((stream) => stream.pipe(process.stdout)) // Print app logs
            // start() resolves only once this line appears in the logs,
            // and rejects with a timeout if it never does.
            .withWaitStrategy(Wait.forLogMessage('Successfully connected to PostgreSQL database!'))
            .start();
    }, 120000); // Generous timeout: npm install runs inside the container

    afterAll(async () => {
        await appContainer.stop();
        await postgresContainer.stop();
        await network.stop(); // Stop the network
    });

    it('should allow the application to connect to PostgreSQL', () => {
        // If the wait strategy had not seen the success log line,
        // beforeAll would have failed; reaching this point proves connectivity.
        expect(appContainer.getId()).toBeDefined();
    });
});

Explanation:

  1. await new Network().start(): Similar to Java, we explicitly create and start a network instance.
  2. .withNetwork(network): Both containers join this shared network.
  3. .withNetworkAliases('postgres-db'): Sets the alias for PostgreSQL.
  4. withEnvironment({ DB_HOST: 'postgres-db', ... }): The Node.js app uses this alias as the database host.
  5. withCopyFilesToContainer: This is a convenient way to copy local files into a GenericContainer without needing a full Dockerfile for simple apps.
  6. withCommand: We instruct the GenericContainer to install npm dependencies, run our app.ts with ts-node, and then idle so the container stays up.
  7. Wait.forLogMessage(...): start() resolves only once the app has logged its success message; a failed connection makes the test fail with a timeout.

3. Python Example: Multi-Service Testing

Now for our Python friends, using testcontainers-python.

Project Setup: Create a new project directory, then install the dependencies (the PyPI package is testcontainers, with optional extras per module):

pip install pytest "testcontainers[postgres]" psycopg2-binary

Step 1: Create a simple Dockerfile for our ‘App’ Create myapp-docker/Dockerfile in your project root.

# myapp-docker/Dockerfile
FROM python:3.10-alpine
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY app.py .
EXPOSE 8000
CMD ["python", "app.py"]

And requirements.txt in the same directory:

# myapp-docker/requirements.txt
psycopg2-binary

Step 2: Create the Dummy Python Application (app.py) Create this in myapp-docker/app.py.

# myapp-docker/app.py
import os
import psycopg2
import time
import sys

DB_HOST = os.getenv("DB_HOST", "localhost")
DB_PORT = os.getenv("DB_PORT", "5432")
DB_NAME = os.getenv("DB_NAME", "testdb")
DB_USER = os.getenv("DB_USER", "testuser")
DB_PASSWORD = os.getenv("DB_PASSWORD", "testpass")

print(f"[MyApp] Attempting to connect to PostgreSQL at {DB_HOST}:{DB_PORT}/{DB_NAME}")

def connect_to_db():
    for i in range(5): # Retry connection a few times
        try:
            conn = psycopg2.connect(
                host=DB_HOST,
                port=DB_PORT,
                database=DB_NAME,
                user=DB_USER,
                password=DB_PASSWORD
            )
            print("[MyApp] Successfully connected to PostgreSQL database!")
            with conn.cursor() as cur:
                cur.execute("SELECT 1")
                print(f"[MyApp] Successfully executed test query: {cur.fetchone()}")
            conn.close()
            return True
        except psycopg2.OperationalError as e:
            print(f"[MyApp] Connection attempt {i+1} failed: {e}")
            time.sleep(2)
    return False

if __name__ == "__main__":
    if connect_to_db():
        sys.exit(0) # Success
    else:
        sys.exit(1) # Failure

Step 3: Write the Python Test (test_multi_service.py) Create this in your project root.

# test_multi_service.py
import os

import pytest
from testcontainers.core.container import DockerContainer
from testcontainers.core.image import DockerImage
from testcontainers.core.network import Network
from testcontainers.core.waiting_utils import wait_for_logs
from testcontainers.postgres import PostgresContainer

@pytest.fixture(scope="session")
def shared_network():
    # 1. Define a shared network (the `with` block handles cleanup)
    with Network() as network:
        yield network

@pytest.fixture(scope="session")
def postgres_container(shared_network):
    # 2. Define the PostgreSQL container (entering the `with` block starts it)
    postgres = (
        PostgresContainer("postgres:15-alpine",
                          username="testuser",
                          password="testpass",
                          dbname="testdb")
        .with_network(shared_network)
        .with_network_aliases("postgres-db")
    )
    with postgres:
        yield postgres

@pytest.fixture(scope="session")
def app_container(postgres_container, shared_network):
    # Get the path to our myapp-docker directory
    current_dir = os.path.dirname(os.path.abspath(__file__))
    app_docker_path = os.path.join(current_dir, "myapp-docker")

    # 3. Build our application image from its Dockerfile, then run it
    with DockerImage(path=app_docker_path, tag="myapp-image:test") as image:
        app = (
            DockerContainer(str(image))
            .with_network(shared_network)
            .with_env("DB_HOST", "postgres-db")
            .with_env("DB_PORT", "5432")
            .with_env("DB_NAME", "testdb")
            .with_env("DB_USER", "testuser")
            .with_env("DB_PASSWORD", "testpass")
        )
        with app:
            # Block until the app has logged its success message (or time out)
            wait_for_logs(app, "Successfully connected to PostgreSQL database!", timeout=60)
            yield app

def test_app_connects_to_postgres(app_container):
    # The fixture already waited for the success log line; assert it explicitly.
    stdout, stderr = app_container.get_logs()
    assert b"Successfully connected to PostgreSQL database!" in stdout + stderr

Explanation:

  1. @pytest.fixture(scope="session"): Pytest fixtures manage the lifecycle of our containers, ensuring they start once for the session and are cleaned up.
  2. with Network() as network: Creates a network object, which automatically handles cleanup with with.
  3. .with_network(shared_network): Both PostgresContainer and DockerContainer join the same network.
  4. .with_network_aliases("postgres-db"): Sets the alias for PostgreSQL.
  5. .with_env("DB_HOST", "postgres-db"): The Python app uses this alias as the database host.
  6. DockerImage(path=app_docker_path, tag="myapp-image:test"): Testcontainers builds an image from the Dockerfile in the myapp-docker directory and removes it again when the with block exits.
  7. wait_for_logs(...): This is a robust wait strategy for Python: it blocks until the container has logged the success message, and raises after the timeout if the message never appears.

Mini-Challenge: Add a Redis Cache

You’ve successfully connected your app to PostgreSQL. Now, let’s make it a bit more complex. Your challenge is to:

  1. Add a Redis container to the existing shared network.
  2. Modify your application (the dummy app you created) to also attempt to connect to the Redis container.
    • Hint: Add new environment variables like REDIS_HOST, REDIS_PORT.
    • Modify your app code to attempt a Redis connection (e.g., set and get a simple key). You’ll need to add a Redis client library to your app’s dependencies (e.g., Jedis for Java, ioredis for Node.js, redis-py for Python).
  3. Update your Testcontainers test to assert that both PostgreSQL and Redis connections were successful from within your application container.

Take your time, review the previous code, and think about how you’d add another service to the network and configure your application.

Hint:

  • For Redis, use the RedisContainer helpers where they exist (@testcontainers/redis for Node.js, testcontainers.redis for Python); in Java, a GenericContainer with the redis:7-alpine image (or a community Redis module) works fine.
  • Remember to use withNetwork() to attach Redis to the same shared network.
  • Give the Redis container a withNetworkAliases() (e.g., “redis-cache”).
  • Update your app container’s environment variables to point to “redis-cache” for the host.
  • Ensure your app’s Dockerfile (or withCopyFilesToContainer for JS/Python) includes the Redis client library.

Common Pitfalls & Troubleshooting

Networking can sometimes be tricky. Here are some common issues and how to resolve them:

  1. “Host not found” / “Connection refused” errors within application container:

    • Symptom: Your app container’s logs show it can’t resolve the database/service hostname or connect to it.
    • Cause: The application container and the service container (e.g., PostgreSQL) are not on the same Docker network. Or, the hostname used by the app doesn’t match the service container’s network alias.
    • Fix: Double-check that withNetwork(yourSharedNetwork) is applied to all containers that need to communicate. Verify that withNetworkAliases() is set on the service container and that your application’s DB_HOST (or equivalent) environment variable correctly references this alias.
  2. Container startup order issues (dependsOn not enough / connection timeouts):

    • Symptom: Your application starts before the dependent service (e.g., database) is fully initialized and ready to accept connections, leading to initial connection failures. dependsOn only guarantees startup order, not readiness.
    • Cause: The dependent service needs time to boot up and listen on its port.
    • Fix: Implement proper wait strategies. For databases, PostgreSQLContainer (and other preconfigured modules) ship with built-in wait strategies. For your custom app, use Wait.forLogMessage(...) (Java), .withWaitStrategy(Wait.forLogMessage(...)) (Node.js), or the wait_for_logs() helper (Python) to block until a specific log line indicates readiness. You can also add retry logic to your application’s connection code, as shown in the Python example.
  3. Misconfiguring DockerComposeContainer:

    • Symptom: Services in your docker-compose.yml don’t start, or your test can’t connect to them.
    • Cause: Incorrect path to the docker-compose.yml file, or issues within the Compose file itself (e.g., syntax errors, unsupported Docker Compose version features for the Docker daemon).
    • Fix:
      • Ensure the path provided to new DockerComposeContainer(...) is correct and relative to your project or explicitly absolute.
      • Validate your docker-compose.yml syntax (docker compose config).
      • Check DockerComposeContainer logs for specific errors during startup.
      • Remember to use withExposedService() to declare which services/ports from the Compose file Testcontainers should expose and wait for.
  4. Resource exhaustion / Too many containers:

    • Symptom: Tests become very slow, Docker runs out of memory/disk space, or your CI/CD pipeline fails due to resource limits.
    • Cause: Creating too many throwaway containers or not cleaning them up efficiently.
    • Fix:
      • Reuse containers across tests: scope="session" fixtures in Pytest, or a static @Container field with @Testcontainers in JUnit (a static field is started once and shared by all tests in the class).
      • Ensure afterAll/tearDown/finally blocks correctly stop() and close() all containers and networks.
      • Consider Testcontainers’ reuse feature (experimental but powerful for CI). We’ll cover this in a later chapter!
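For pitfall 3, it helps to see what a well-formed Compose file for this chapter’s scenario looks like. The sketch below is illustrative (service and image names are assumptions); the key point is that Compose places both services on one default network, so “postgres” resolves as a hostname from user-service:

```yaml
services:
  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_PASSWORD: test
  user-service:
    image: my-org/user-service:local   # illustrative image name
    environment:
      DB_HOST: postgres   # the Compose service name doubles as the hostname
      DB_PORT: "5432"
    depends_on:
      - postgres
```

In your test, you would then declare something like withExposedService("user-service", 8080) so Testcontainers knows which port to wait for and map to the host.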
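The application-side retry logic recommended for pitfall 2 is language-agnostic; here is a minimal Python sketch (the helper name `connect_with_retry` is ours, not a Testcontainers API):

```python
import time

def connect_with_retry(connect, attempts=10, delay=0.5):
    """Call `connect` until it succeeds, sleeping `delay` seconds between tries.

    `connect` is any zero-argument callable that raises on failure,
    e.g. a lambda wrapping psycopg2.connect(...) or redis.Redis(...).ping().
    """
    last_error = None
    for _ in range(attempts):
        try:
            return connect()
        except Exception as exc:  # narrow to your driver's error type in real code
            last_error = exc
            time.sleep(delay)
    raise ConnectionError(f"service not ready after {attempts} attempts") from last_error
```

Pairing a retry loop like this with a container wait strategy covers both halves of the problem: the wait strategy keeps the test from racing the service, and the retry loop makes the app itself tolerant of slow-starting dependencies.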

Summary

Phew! You’ve tackled some advanced concepts today! Here’s a quick recap of what you’ve mastered:

  • User-Defined Networks: The modern, robust way for Docker containers to communicate, providing DNS resolution by container name.
  • Testcontainers Network Management: How Testcontainers transparently creates and manages these networks, and how you explicitly attach containers using withNetwork() and withNetworkAliases().
  • Inter-Container Communication: Your application can now connect to other services (like databases or message brokers) using their network aliases as hostnames, mimicking a real production environment.
  • DockerComposeContainer: The ultimate tool for testing complex, multi-service stacks defined by a docker-compose.yml file, bringing high-fidelity integration testing to your fingertips.
  • Multi-Language Examples: You’ve seen these patterns implemented in Java, JavaScript/TypeScript, and Python, solidifying your understanding across different ecosystems.
  • Troubleshooting: You’re now equipped to diagnose and fix common networking issues in your containerized tests.

You’re no longer just testing isolated components; you’re building a miniature, disposable replica of your entire service landscape for your tests. This ability is invaluable for ensuring your microservices play nicely together, long before they hit production.

In our next chapter, we’ll delve into Performance Tuning and Reuse Strategies, learning how to make your containerized tests lightning fast and resource-efficient, especially critical for CI/CD pipelines!

