Introduction: The Power of Portable Environments
Welcome to Chapter 6! So far, we’ve laid a strong foundation with Linux fundamentals, version control using Git and GitHub, and even dipped our toes into CI/CD with GitHub Actions and Jenkins. You’ve learned how to manage your code and automate basic workflows. But what happens when your perfectly working code on your machine suddenly breaks when deployed to a server? This frustrating scenario, often called “dependency hell” or “it works on my machine,” is a common headache in software development.
This is where Docker comes to the rescue! In this chapter, we’re going to dive deep into Docker, a revolutionary technology that packages your application and all its dependencies into a single, isolated unit called a container. Imagine a tiny, self-contained world where your application always finds exactly what it needs, regardless of where it runs. This guarantees consistency from development to production, making deployments predictable and reliable.
By the end of this chapter, you’ll understand the core concepts of Docker, how to install it, run your first containers, and begin to appreciate why it’s an indispensable tool in the modern DevOps toolkit. Get ready to containerize your world!
Core Concepts: What Exactly is Docker?
Before we start typing commands, let’s build a solid understanding of what Docker is and the fundamental ideas behind it.
What is Docker?
At its heart, Docker is a platform that allows you to develop, ship, and run applications inside containers. Think of a container like a standardized shipping container. Just as a physical shipping container can hold anything (furniture, electronics, food) and be transported anywhere (ship, train, truck) without worrying about its contents, a Docker container can hold any application and run on any machine that has Docker installed, without worrying about the underlying operating system’s specific configurations or dependencies.
This “packaging” includes everything your application needs: the code, runtime, system tools, system libraries, and settings.
Virtual Machines vs. Containers: A Key Distinction
To truly grasp Docker’s power, it’s helpful to understand how containers differ from traditional Virtual Machines (VMs). You might be familiar with VMs as a way to run multiple operating systems on a single physical server.
Let’s compare the two approaches side by side:
Key Differences:
- Operating System: VMs include a full-fledged guest operating system for each VM, leading to significant resource overhead (CPU, RAM, disk space). Containers, on the other hand, share the host operating system’s kernel. They only package the application and its necessary binaries/libraries.
- Isolation: Both provide isolation, but at different levels. VMs isolate at the hardware level (virtualized hardware), while containers isolate at the operating system process level.
- Startup Time: Because they don’t need to boot an entire OS, containers start up in seconds (or even milliseconds), compared to minutes for VMs.
- Resource Usage: Containers are much more lightweight and efficient, allowing you to run many more containers on a single host than VMs.
Why does this matter for DevOps? Lightweight, fast-starting, consistent environments are perfect for microservices, continuous integration, and rapid deployments.
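You can see this lightweight nature for yourself once Docker is installed (covered later in this chapter). A minimal sketch; exact timings and memory figures will vary by machine:

```shell
# Time a full container lifecycle: create, run a trivial command, exit.
# On most machines this completes in about a second -- far faster than
# booting a VM, because no guest OS has to start.
time docker run --rm alpine echo "container up"

# Containers run as ordinary host processes, so their footprint is tiny.
docker run -d --name tiny alpine sleep 300
docker stats --no-stream tiny   # note the small memory usage
docker rm -f tiny
```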
Docker Engine: The Heart of Docker
The Docker Engine is the core technology that builds and runs containers. It consists of:
- Docker Daemon (dockerd): The persistent background process that manages Docker objects like images, containers, networks, and volumes.
- Docker Client (docker): The command-line interface (CLI) that allows you to interact with the Docker Daemon. When you type docker run, you’re using the client to talk to the daemon.
- REST API: An interface for programs to talk to the daemon; the client itself uses it under the hood.
Images vs. Containers: The Blueprint and the House
This is a fundamental distinction you must understand:
- Docker Image: An image is a lightweight, standalone, executable package that includes everything needed to run a piece of software, including the code, a runtime, libraries, environment variables, and config files. Think of an image as a blueprint or a template. It’s immutable – once built, it doesn’t change.
- Docker Container: A container is a runnable instance of an image. When you run an image, it becomes a container. You can have multiple containers running from the same image, each completely isolated from the others. Think of a container as the actual house built from the blueprint. It’s a running process, and you can interact with it.
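The blueprint/house analogy is easy to demonstrate once Docker is installed (the commands used here are all introduced later in this chapter); a quick sketch:

```shell
# One image, many isolated containers: each gets its own filesystem
# and process space, built from the same "blueprint".
docker run -d --name web1 nginx:latest
docker run -d --name web2 nginx:latest

# A change inside web1 does not affect web2.
docker exec web1 touch /tmp/only-in-web1
docker exec web2 ls /tmp      # the file created in web1 is not listed here

# Clean up both containers.
docker rm -f web1 web2
```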
Docker Hub: Your Global Image Registry
Docker Hub is a cloud-based registry service provided by Docker. It’s like GitHub, but for Docker images. You can find official images for popular software (like Ubuntu, Nginx, Python) and also publish your own custom images. It’s where images are stored, shared, and pulled from.
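Once Docker is installed, you can browse and download images from Docker Hub directly from the command line. A brief sketch:

```shell
# Search Docker Hub for images; official images are flagged in the output.
docker search nginx

# Download an image without running it; omitting a tag implies :latest.
docker pull nginx:latest

# Image names follow [registry/][namespace/]repository[:tag] --
# "nginx" is shorthand for docker.io/library/nginx:latest.
docker images
```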
Step-by-Step Implementation: Getting Your Hands Dirty with Docker
It’s time to install Docker and run our first containers! We’ll focus on Linux (Debian/Ubuntu) for installation, as it’s common in DevOps environments.
1. Installing Docker Engine (as of January 2026)
The Docker team regularly updates its installation instructions, so it’s always best to refer to the official documentation for the latest and most secure method. The general steps, however, remain consistent. We’ll install the Docker Engine Community Edition (docker-ce).
Prerequisites: You’ll need a Linux machine (e.g., Ubuntu 22.04 LTS or newer). Ensure your package manager is up to date.
# Update your package index
sudo apt update
Now, let’s install Docker. These steps are adapted from the official Docker documentation for Debian-based systems.
# 1. Install necessary packages to allow apt to use a repository over HTTPS
sudo apt install ca-certificates curl gnupg -y
# 2. Add Docker's official GPG key
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
# 3. Set up the repository
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# 4. Update the apt package index again (now with Docker's repo)
sudo apt update
# 5. Install the latest stable version of Docker Engine, containerd, and Docker Compose.
# `docker-ce` is the Docker Engine Community Edition.
# `docker-ce-cli` is the command-line interface.
# `containerd.io` is the container runtime.
# `docker-buildx-plugin` and `docker-compose-plugin` are for building and orchestrating.
sudo apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y
Why these steps?
- ca-certificates, curl, gnupg: Standard tools for securely fetching and verifying packages.
- gpg --dearmor: Converts the ASCII-armored GPG key into the binary format apt uses to verify Docker packages.
- tee /etc/apt/sources.list.d/docker.list: Adds the Docker repository URL to your system’s package sources, telling apt where to find Docker packages.
- docker-ce, docker-ce-cli, containerd.io: The core components of the Docker platform.
- docker-buildx-plugin, docker-compose-plugin: Add powerful features for building images and managing multi-container applications (which we’ll cover in the next chapter!).
After installation, the Docker daemon should be running automatically. You can check its status:
sudo systemctl status docker
You should see output indicating it’s active (running).
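You can also confirm the installed versions and inspect the daemon’s configuration; a quick check:

```shell
docker --version          # client version
docker compose version    # Compose plugin version
sudo docker info          # daemon details: storage driver, cgroups, etc.
```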
2. Verify Docker Installation
To confirm Docker is installed correctly and you can run commands, try the famous “Hello World” container:
sudo docker run hello-world
What happened?
- If Docker isn’t installed, you’d get a “command not found” error.
- If it’s installed but the daemon isn’t running, you’d get a connection error.
- If everything is working, Docker will:
  - Check if the hello-world image exists locally.
  - If not, pull (download) the hello-world image from Docker Hub.
  - Run a new container from that image.
  - The container will execute its simple program, print a message to your terminal, and then exit.
You’ve just run your first container! How cool is that?
3. Managing Docker Permissions (Optional but Recommended)
By default, you need sudo to run Docker commands. This can be cumbersome. To run Docker commands without sudo, you can add your user to the docker group.
# Add your current user to the docker group
sudo usermod -aG docker $USER
# Apply the new group membership (you might need to log out and back in, or reboot)
newgrp docker
Why newgrp docker? This command temporarily applies the new group membership to your current shell session without needing a full logout/login. After running it, try:
docker run hello-world
It should now work without sudo!
4. Exploring Containers: docker ps
The hello-world container ran and immediately exited. How do we see containers that are running, or have run?
To see currently running containers:
docker ps
You likely won’t see anything if all your containers have exited.
To see all containers, including those that have exited:
docker ps -a
Now you should see your hello-world container(s) listed with their CONTAINER ID, IMAGE, COMMAND, CREATED, STATUS, PORTS, and NAMES. Notice the STATUS will be something like Exited (0) ....
Think about it: Why is docker ps -a important for troubleshooting? (Hint: What if a container starts and immediately crashes?)
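As a nudge toward the answer: docker ps -a is often combined with filters and custom formatting to zero in on failed containers. A few variations worth knowing:

```shell
# Show only containers that have exited.
docker ps -a --filter "status=exited"

# Show only the most recently created container, running or not.
docker ps -l

# Trim the output to just the columns you care about.
docker ps -a --format "table {{.Names}}\t{{.Status}}\t{{.Image}}"
```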
5. Running a Persistent Web Server: Nginx
Let’s run something more interactive – a web server! We’ll use Nginx, a popular web server we’ll explore in more detail later.
docker run -d -p 8080:80 --name my-nginx nginx:latest
Let’s break down this command, piece by piece:
- docker run: The command to create and run a new container.
- -d (or --detach): Runs the container in “detached” mode, meaning it runs in the background and doesn’t tie up your terminal. You get your prompt back immediately.
- -p 8080:80 (or --publish 8080:80): This is port mapping. It maps port 80 inside the container (where Nginx listens by default) to port 8080 on your host machine. So, when you access localhost:8080 on your host, Docker forwards that traffic to port 80 inside the my-nginx container.
- --name my-nginx: Gives your container a human-readable name (my-nginx). If you don’t specify a name, Docker generates a random one (e.g., optimistic_hoover). Using names makes it much easier to refer to containers later.
- nginx:latest: Specifies the Docker image to use. We’re telling Docker to pull the nginx image, and :latest refers to the latest stable version tag.
After running this, you’ll see a long string (the container ID). Now, check your running containers:
docker ps
You should see my-nginx listed with a STATUS of Up ....
Challenge: Open your web browser and navigate to http://localhost:8080. What do you see? You should see the default Nginx welcome page! Congratulations, you’re serving a website from a Docker container!
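If you’re working on a headless server without a browser, you can verify the same thing from the terminal:

```shell
# Fetch just the response headers; look for "200 OK" and "Server: nginx/..."
curl -I http://localhost:8080
```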
6. Inspecting and Interacting with Containers
Viewing Logs: To see what’s happening inside your Nginx container, you can check its logs:
docker logs my-nginx
You’ll see Nginx access and error logs. Try refreshing http://localhost:8080 a few times and then run docker logs my-nginx again to see new access entries.
Stopping a Container: To stop the Nginx container:
docker stop my-nginx
Verify it’s stopped with docker ps. It should no longer be listed. Check http://localhost:8080 – it won’t be accessible.
Starting a Stopped Container: To start it again:
docker start my-nginx
It will resume from its previous state. Verify with docker ps and your browser.
Executing Commands Inside a Running Container: You can run commands inside a running container, just like you would on a regular Linux machine.
docker exec -it my-nginx bash
Let’s break this down:
- docker exec: Executes a command in a running container.
- -it: A combination of -i (interactive) and -t (pseudo-TTY). Together they let you interact with the container’s shell.
- my-nginx: The name of our container.
- bash: The command we want to run inside the container (to open a Bash shell).
You should now see your terminal prompt change, indicating you’re inside the my-nginx container! Try some familiar Linux commands:
ls -l /usr/share/nginx/html
cat /etc/nginx/nginx.conf
exit
The exit command will bring you back to your host machine’s terminal. This docker exec command is incredibly powerful for debugging and understanding what’s going on within your containers.
7. Cleaning Up: Removing Containers and Images
Containers and images both consume disk space. It’s good practice to clean up what you no longer need.
Remove a Container: A container must be stopped before it can be removed.
docker stop my-nginx
docker rm my-nginx
Now docker ps -a should not show my-nginx.
Remove an Image: You can remove an image once no containers are using it.
docker rmi nginx:latest
If you get an error that the image is in use, it means a container (even a stopped one) might still be associated with it. Remove those containers first. You can also remove the hello-world image:
docker rmi hello-world
Listing Images: To see all images on your system:
docker images
Word of caution: Be careful with docker rm and docker rmi, especially when using options like -f (force) or when removing many items at once. Always double-check what you’re removing!
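For bulk cleanup, Docker also provides prune commands, which remove everything in a category that is no longer in use. The same word of caution applies: they prompt once and then delete in bulk.

```shell
# Remove all stopped containers.
docker container prune

# Remove dangling images (untagged layers left over from builds).
docker image prune

# Remove all unused images, not just dangling ones.
docker image prune -a

# The heavy hammer: unused containers, networks, images, and build cache.
docker system prune
```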
Mini-Challenge: Python Web Server in a Container
Your turn! Let’s apply what you’ve learned.
Challenge:
- Run a simple Python HTTP server in a Docker container.
- Map port 8000 on your host to port 8000 inside the container.
- Access the server from your web browser.
Hint: Python has a built-in HTTP server module. If you were to run it directly on your host, the command would be python3 -m http.server 8000. Think about how to translate this into a docker run command using the python official image. You might need to specify the command to run after the image name.
What to observe/learn: How easily you can spin up a temporary, isolated development environment for different programming languages or tools without installing anything on your host machine.
Hint: You’ll want to use the `python:3.10-slim-buster` image (or a similar stable Python 3 tag). The command you want to run inside the container is `python3 -m http.server 8000`. Remember to detach (`-d`) and map ports (`-p`).
Solution:
docker run -d -p 8000:8000 --name my-python-server python:3.10-slim-buster python3 -m http.server 8000
Then, open your browser to http://localhost:8000. You should see a directory listing of the Python container’s current working directory (usually /).
Don’t forget to clean up:
docker stop my-python-server
docker rm my-python-server
# docker rmi python:3.10-slim-buster # Only if no other containers are using it
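A small variation you might try next: serving files from your host machine instead of the container’s own filesystem, using a bind mount (a preview of the volume concepts covered in the next chapter):

```shell
# -v HOST_PATH:CONTAINER_PATH bind-mounts the current directory into the
# container; -w sets the working directory the server runs from.
docker run -d -p 8000:8000 --name my-file-server \
  -v "$PWD":/srv -w /srv \
  python:3.10-slim-buster python3 -m http.server 8000
```

Browsing to http://localhost:8000 now lists the files of your host’s current directory. Clean up with docker rm -f my-file-server when you’re done.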
Common Pitfalls & Troubleshooting
Even with Docker’s simplicity, you might encounter some common issues.
“Port already in use” Error:
- Problem: You try to run a container with -p 8080:80, but port 8080 is already being used by another process on your host machine (either another Docker container or a native application).
- Solution:
  - Change the host port (-p <NEW_HOST_PORT>:80).
  - Stop the existing process using that port (e.g., docker stop <container_name_using_port>).
  - Find which process is using the port on Linux: sudo lsof -i :8080.
Container Exits Immediately:
- Problem: You run a container, check docker ps, and it’s not there, but docker ps -a shows it as Exited. This often happens with containers designed to run a single command and then finish (like hello-world). For server applications (like Nginx), it means the main process within the container crashed or didn’t start correctly.
- Solution:
  - Check the container’s logs immediately: docker logs <container_name_or_id>. This will usually tell you why it exited.
  - Ensure your docker run command is correct, especially the command executed within the container.
  - Make sure you’re using the correct image and tag.
Permission Denied / Cannot connect to the Docker daemon:
- Problem: You forgot sudo and haven’t added your user to the docker group, or the Docker daemon isn’t running.
- Solution:
  - Always use sudo docker ... if you haven’t configured user permissions.
  - Add your user to the docker group: sudo usermod -aG docker $USER, then newgrp docker or log out/in.
  - Check if the Docker daemon is running: sudo systemctl status docker. If not, start it: sudo systemctl start docker.
Summary
Phew! You’ve just taken your first big step into the world of containerization with Docker. Let’s recap what we’ve covered:
- What is Docker? A platform for packaging and running applications in isolated, consistent environments called containers.
- VMs vs. Containers: Containers are lightweight, share the host OS kernel, and start faster than VMs, making them ideal for modern application deployment.
- Docker Engine: The core component comprising the daemon, client, and API.
- Images vs. Containers: Images are static blueprints, containers are runnable instances of those images.
- Docker Hub: A central registry for sharing and pulling Docker images.
- Hands-on Experience: You successfully installed Docker, ran “Hello World,” deployed a persistent Nginx web server, interacted with containers using docker exec, and managed their lifecycle (stop, start, remove).
- Troubleshooting: You learned about common issues like port conflicts and containers that exit immediately.
Docker is a cornerstone of modern DevOps, enabling consistency, portability, and efficiency throughout the software delivery pipeline. In the next chapter, we’ll take Docker to the next level by learning how to build our own custom Docker images using Dockerfiles and how to manage multi-container applications with Docker Compose. Get ready to create your own isolated worlds!
References
- Docker Official Documentation: Get Docker Engine - Community
- Docker Official Documentation: Docker run command
- Docker Official Documentation: Images and Containers
- Nginx Official Image on Docker Hub