Welcome back, future DevOps guru! In our previous Kubernetes adventures, we learned about the fundamental building blocks like Pods, Deployments, and Services. We even deployed a single application. But what happens when your application isn’t just one component, but a collection of interconnected services, like a frontend web app talking to a backend API, which might then talk to a database?
That’s the real world of modern applications, often built using a microservices architecture. In this chapter, we’re going to tackle a crucial next step: deploying a multi-service application to Kubernetes. This project will solidify your understanding of how different parts of an application communicate within the Kubernetes ecosystem and how to expose them to the outside world.
By the end of this chapter, you’ll have hands-on experience deploying a simple multi-service application, understanding the YAML configurations for each component, and seeing how they all fit together to form a cohesive system. Get ready to put your Kubernetes knowledge into action!
Prerequisites
Before we dive in, make sure you’re comfortable with:
- Docker: Building images and understanding containers.
- Kubernetes Fundamentals: Pods, Deployments, Services (ClusterIP, NodePort), and basic kubectl commands.
- YAML Syntax: Writing and understanding Kubernetes manifest files.
- Minikube or a Kubernetes Cluster: You’ll need a running Kubernetes environment to deploy our application. Minikube is perfect for local development.
If any of these feel a bit fuzzy, quickly review the previous chapters on Docker and Kubernetes!
Core Concepts: Building a Multi-Service App on Kubernetes
Deploying a multi-service application means orchestrating several independent, yet interconnected, components. Let’s break down the key concepts we’ll be using.
Our Sample Application: A Simple “Hello World” Frontend and Backend
To keep our focus on Kubernetes orchestration rather than complex application logic, we’ll use a very basic application:
- Frontend Service: A simple web application (e.g., using Node.js or Python Flask) that displays “Hello from Frontend!” and makes a request to a backend service to get another greeting.
- Backend Service: Another simple API (e.g., Node.js or Python Flask) that responds with “Hello from Backend!”.
This architecture is a common pattern: a user-facing frontend consuming data or services from a backend API.
Figure 13.1: Simplified Multi-Service Application Architecture
Structuring Kubernetes Manifests
When dealing with multiple services, it’s good practice to organize your Kubernetes YAML files. You could put everything in one giant file, but that quickly becomes unmanageable. Instead, we’ll create separate YAML files for each component:
- backend-deployment.yaml
- backend-service.yaml
- frontend-deployment.yaml
- frontend-service.yaml
This modular approach makes your configuration easier to read, maintain, and troubleshoot.
Inter-Service Communication: The Magic of Service DNS
How does our frontend know where to find the backend? In Kubernetes, this is handled beautifully by Service DNS. When you create a Service of type: ClusterIP (the default), Kubernetes automatically assigns it a stable DNS name within the cluster.
The format is typically [service-name].[namespace].svc.cluster.local. If you’re in the same namespace, you can often just use [service-name]. So, if our backend service is named backend-service, the frontend can simply make HTTP requests to http://backend-service:port. Kubernetes’ internal DNS resolver takes care of routing the request to the correct backend Pods. This is a powerful feature that simplifies service discovery immensely.
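To make the naming concrete, here is a tiny Node.js helper (illustrative only, not part of our application) that composes the fully qualified in-cluster DNS name a Service gets; the default cluster domain is `cluster.local` unless your cluster overrides it:

```javascript
// Illustrative helper: compose the in-cluster DNS name for a Service.
// Within the same namespace, the short name (e.g. "backend-service") is
// usually enough; the FQDN below is what it expands to.
function serviceDns(name, namespace = 'default', clusterDomain = 'cluster.local') {
  return `${name}.${namespace}.svc.${clusterDomain}`;
}

console.log(serviceDns('backend-service'));
// backend-service.default.svc.cluster.local
```

This is purely to illustrate the naming scheme; in the manifests later in this chapter we rely on the short name.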
Persistent Storage (A Quick Note)
For this specific project, our backend will be stateless (just returning a string), so we won’t need persistent storage. However, for applications with databases (like our previous mongodb example), you would need to introduce PersistentVolume (PV) and PersistentVolumeClaim (PVC) resources to ensure your data survives Pod restarts. We’ll revisit this in more advanced projects.
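For reference only (we won't apply this in this chapter), a minimal PersistentVolumeClaim might look like the sketch below; the claim name and storage size are illustrative assumptions, not values from our project:

```yaml
# pvc-sketch.yaml (illustrative; not used in this chapter's project)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongodb-pvc # hypothetical name for a database's storage claim
spec:
  accessModes:
    - ReadWriteOnce # volume can be mounted read-write by a single node
  resources:
    requests:
      storage: 1Gi # illustrative size
```

A Deployment (or, more typically for databases, a StatefulSet) would then mount this claim as a volume so data survives Pod restarts.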
Step-by-Step Implementation
Let’s get our hands dirty! We’ll start by creating our simple application components, containerizing them, and then writing the Kubernetes manifests to deploy them.
Step 1: Prepare Our Application Docker Images
First, let’s create our very simple frontend and backend applications and their Dockerfiles.
Backend Application (backend/app.js)
This simple Node.js Express app will respond to requests with “Hello from Backend!”.
// backend/app.js
const express = require('express');
const app = express();
const port = 3001; // Backend will listen on port 3001

app.get('/', (req, res) => {
  res.send('Hello from Backend!');
});

app.listen(port, () => {
  console.log(`Backend service listening at http://localhost:${port}`);
});
Backend Dockerfile (backend/Dockerfile)
# backend/Dockerfile
# Use an official Node.js runtime as a parent image
FROM node:20-alpine
# Set the working directory in the container
WORKDIR /app
# Copy package.json and package-lock.json to the working directory
# We copy these separately to leverage Docker's layer caching
COPY package*.json ./
# Install app dependencies
RUN npm install
# Copy the rest of the application code
COPY . .
# Expose the port the app runs on
EXPOSE 3001
# Define the command to run the app
CMD ["node", "app.js"]
Frontend Application (frontend/app.js)
This Node.js Express app will serve a simple HTML page and try to fetch data from the backend.
// frontend/app.js
const express = require('express');
const axios = require('axios'); // For making HTTP requests
const app = express();
const port = 3000; // Frontend will listen on port 3000

// Define the backend service URL using an environment variable.
// In Kubernetes, this will be the Service name.
const BACKEND_URL = process.env.BACKEND_URL || 'http://localhost:3001';

app.get('/', async (req, res) => {
  let backendMessage = 'Backend not reachable';
  try {
    // Attempt to fetch data from the backend service
    const response = await axios.get(BACKEND_URL);
    backendMessage = response.data;
  } catch (error) {
    console.error('Error fetching from backend:', error.message);
  }
  res.send(`
    <h1>Hello from Frontend!</h1>
    <p>Message from Backend: <strong>${backendMessage}</strong></p>
    <p>This frontend is configured to reach the backend at: ${BACKEND_URL}</p>
  `);
});

app.listen(port, () => {
  console.log(`Frontend service listening at http://localhost:${port}`);
});
Frontend Dockerfile (frontend/Dockerfile)
# frontend/Dockerfile
# Use an official Node.js runtime as a parent image
FROM node:20-alpine
# Set the working directory in the container
WORKDIR /app
# Copy package.json and package-lock.json
COPY package*.json ./
# Install app dependencies (including axios)
RUN npm install
# Copy the rest of the application code
COPY . .
# Expose the port the app runs on
EXPOSE 3000
# Define the command to run the app
CMD ["node", "app.js"]
The package.json files (one inside each of the backend/ and frontend/ folders)
For the backend:
// backend/package.json
{
  "name": "backend",
  "version": "1.0.0",
  "description": "Simple backend service for Kubernetes project",
  "main": "app.js",
  "scripts": {
    "start": "node app.js"
  },
  "dependencies": {
    "express": "^4.18.2"
  }
}
For the frontend:
// frontend/package.json
{
  "name": "frontend",
  "version": "1.0.0",
  "description": "Simple frontend service for Kubernetes project",
  "main": "app.js",
  "scripts": {
    "start": "node app.js"
  },
  "dependencies": {
    "express": "^4.18.2",
    "axios": "^1.6.5"
  }
}
Build and Push Docker Images
Now, let’s build these images. For simplicity, we’ll build them locally and tag them for Minikube’s internal Docker daemon. If you were deploying to a cloud cluster, you’d push them to a Docker Registry (like Docker Hub or your cloud provider’s registry).
First, ensure your Minikube Docker environment is set up:
# Point your shell to Minikube's Docker daemon
eval $(minikube docker-env)
Now, navigate to your backend directory and build:
cd backend
docker build -t multi-service-backend:1.0.0 .
cd ..
Explanation:
- cd backend: Change directory into the backend application folder.
- docker build -t multi-service-backend:1.0.0 .: Builds the Docker image.
- -t multi-service-backend:1.0.0: Tags the image with a name and version. This is crucial for Kubernetes to find it.
- .: Specifies the build context (the current directory).
Repeat for the frontend:
cd frontend
docker build -t multi-service-frontend:1.0.0 .
cd ..
Explanation: Similar to the backend, we’re building and tagging the frontend image.
You can verify the images are available in Minikube’s Docker daemon:
docker images | grep "multi-service"
Step 2: Deploy the Backend Service to Kubernetes
We’ll start with the backend, as the frontend depends on it.
Create backend-deployment.yaml
This file defines how our backend Pods will be created and managed.
# backend-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-deployment
  labels:
    app: multi-service-app
    tier: backend
spec:
  replicas: 1 # Start with one replica for the backend
  selector:
    matchLabels:
      app: multi-service-app
      tier: backend
  template:
    metadata:
      labels:
        app: multi-service-app
        tier: backend
    spec:
      containers:
        - name: backend-container
          image: multi-service-backend:1.0.0 # Our locally built image
          imagePullPolicy: Never # Crucial for Minikube; tells K8s not to pull from an external registry
          ports:
            - containerPort: 3001 # The port our Node.js app listens on
Explanation of backend-deployment.yaml:
- apiVersion: apps/v1: Specifies the API version for Deployment resources. apps/v1 is the current stable version for Deployments.
- kind: Deployment: Declares this resource as a Deployment.
- metadata.name: A unique name for our Deployment.
- metadata.labels: Key-value pairs that help organize and select resources. We use app and tier labels.
- spec.replicas: The desired number of identical Pods. We'll start with 1.
- spec.selector.matchLabels: Tells the Deployment which Pods it manages. It must match the labels defined in template.metadata.labels.
- spec.template: The blueprint for the Pods the Deployment will create.
  - metadata.labels: Labels for the Pods. These are used by Services to select Pods.
  - spec.containers: An array defining the containers within each Pod.
    - name: Name of the container.
    - image: The Docker image to use. This should match the tag we used earlier.
    - imagePullPolicy: Never: Vital when using locally built images with Minikube. It tells Kubernetes not to try pulling the image from an external registry (like Docker Hub) but to use the image already available in Minikube's local Docker daemon. For production, you'd typically use Always or IfNotPresent and push your images to a registry.
    - ports.containerPort: The port your application inside the container listens on (3001 for our backend).
Apply this manifest:
kubectl apply -f backend-deployment.yaml
Verify the deployment and Pod:
kubectl get deployment backend-deployment
kubectl get pods -l tier=backend
Create backend-service.yaml
Now, let’s create a Service to expose our backend Pods internally within the cluster.
# backend-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: backend-service # This name will be used for inter-service communication
  labels:
    app: multi-service-app
    tier: backend
spec:
  selector:
    app: multi-service-app
    tier: backend # This selects the Pods created by backend-deployment
  ports:
    - protocol: TCP
      port: 3001 # The port the service itself listens on
      targetPort: 3001 # The port on the Pod to forward traffic to
  type: ClusterIP # Default, but explicitly stated for clarity
Explanation of backend-service.yaml:
- apiVersion: v1: The API version for Service resources. v1 is current and stable.
- kind: Service: Declares this resource as a Service.
- metadata.name: The name of the Service. This is the DNS name other services will use to find it (e.g., http://backend-service:3001).
- spec.selector: How the Service finds its target Pods. It matches the labels defined on the Pods (app: multi-service-app, tier: backend).
- spec.ports: Defines the ports the Service exposes.
  - port: The port that the Service itself listens on within the cluster.
  - targetPort: The port on the Pod that the Service should forward traffic to.
- type: ClusterIP: Creates an internal-only IP address, making the Service reachable only from within the cluster. This is perfect for backend services.
Apply this manifest:
kubectl apply -f backend-service.yaml
Verify the service:
kubectl get service backend-service
You should see a CLUSTER-IP assigned to the service.
Step 3: Deploy the Frontend Service to Kubernetes
Next, we’ll deploy our frontend, which needs to know how to talk to the backend.
Create frontend-deployment.yaml
# frontend-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-deployment
  labels:
    app: multi-service-app
    tier: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: multi-service-app
      tier: frontend
  template:
    metadata:
      labels:
        app: multi-service-app
        tier: frontend
    spec:
      containers:
        - name: frontend-container
          image: multi-service-frontend:1.0.0
          imagePullPolicy: Never
          ports:
            - containerPort: 3000 # Frontend app listens on port 3000
          env: # This is where we tell the frontend how to find the backend
            - name: BACKEND_URL
              value: "http://backend-service:3001" # Service name + port
Explanation of frontend-deployment.yaml:
- Most fields are similar to backend-deployment.yaml.
- image: multi-service-frontend:1.0.0: Uses our frontend image.
- ports.containerPort: 3000: The port our frontend Node.js app listens on.
- env: This is a critical part for inter-service communication.
  - We define an environment variable BACKEND_URL inside the frontend container.
  - Its value is set to http://backend-service:3001. backend-service is the name of our Kubernetes Service for the backend, and 3001 is the port that Service exposes. Kubernetes' internal DNS ensures this name resolves correctly to the backend Pods.
Apply this manifest:
kubectl apply -f frontend-deployment.yaml
Verify the deployment and Pod:
kubectl get deployment frontend-deployment
kubectl get pods -l tier=frontend
Step 4: Expose the Frontend Service Externally
Finally, we need a way to access our frontend from our web browser.
Create frontend-service.yaml
# frontend-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend-service
  labels:
    app: multi-service-app
    tier: frontend
spec:
  selector:
    app: multi-service-app
    tier: frontend # Selects the Pods created by frontend-deployment
  ports:
    - protocol: TCP
      port: 80 # The port the service itself listens on (standard HTTP)
      targetPort: 3000 # The port on the Pod to forward traffic to (frontend app port)
      nodePort: 30080 # Optional: a specific port on the Node (must be 30000-32767)
  type: NodePort # Exposes the service on a port on each Node
Explanation of frontend-service.yaml:
- type: NodePort: This Service type makes the frontend accessible from outside the cluster. Kubernetes opens a static port on each Node (the nodePort) and forwards traffic from that port to the targetPort of the selected Pods.
- port: 80: The Service itself listens on port 80 within the cluster.
- targetPort: 3000: Traffic is forwarded to port 3000 on the frontend Pods.
- nodePort: 30080: We explicitly set a nodePort. If omitted, Kubernetes assigns a random port in the 30000-32767 range. Using a specific port can be convenient for testing.
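As a quick sanity check you can run anywhere Node.js is installed, this small snippet (illustrative, not part of the app) encodes the default NodePort range constraint; note that a cluster administrator can change the range via the API server's service-node-port-range flag:

```javascript
// Illustrative check: NodePort values must fall within the cluster's
// service-node-port-range, which defaults to 30000-32767.
function isValidNodePort(port) {
  return Number.isInteger(port) && port >= 30000 && port <= 32767;
}

console.log(isValidNodePort(30080)); // true  (our chosen nodePort)
console.log(isValidNodePort(8080));  // false (outside the default range)
```

If you pick a value outside this range, kubectl apply will reject the Service manifest.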
Apply this manifest:
kubectl apply -f frontend-service.yaml
Verify the service:
kubectl get service frontend-service
Look for the PORT(S) column. You should see 80:30080/TCP. This means the service is listening on port 80 internally, and traffic to port 30080 on the Node will be routed to it.
Access Your Application!
If you’re using Minikube, you can get the URL to access your application:
minikube service frontend-service --url
This command will output a URL like http://192.168.49.2:30080. Copy this URL and paste it into your web browser.
You should see:
Hello from Frontend!
Message from Backend: Hello from Backend!
This frontend is configured to reach the backend at: http://backend-service:3001
Congratulations! You’ve successfully deployed a multi-service application to Kubernetes, with inter-service communication and external access!
Mini-Challenge: Scale Your Backend
You’ve got a working multi-service app. Now, let’s test Kubernetes’ scaling capabilities.
Challenge: Increase the number of backend replicas to 3. Observe what happens to the backend Pods.
Hint: You can either edit backend-deployment.yaml (set replicas: 3) and apply it again with kubectl apply -f backend-deployment.yaml, or use the kubectl scale command for a quick change.
What to Observe/Learn:
- How quickly Kubernetes creates new Pods.
- The status of the new Pods (Running).
- How the backend-service automatically distributes traffic across all 3 backend Pods without any additional configuration. This is the power of Services!
Once done, you can scale it back down to 1 or 0 (to delete all backend Pods) if you wish.
# Example hint:
kubectl scale deployment/backend-deployment --replicas=3
Common Pitfalls & Troubleshooting
Working with multi-service applications on Kubernetes can introduce new challenges. Here are some common issues and how to approach them:
ImagePullBackOff for your custom images:
- Symptom: Your Pods stay in a Pending or ErrImagePull state, and kubectl describe pod <pod-name> shows ImagePullBackOff.
- Cause: Kubernetes can't find or access your Docker image.
- Troubleshooting:
  - Did you build the image with the correct tag (multi-service-backend:1.0.0)?
  - If using Minikube, did you run eval $(minikube docker-env) before building, and set imagePullPolicy: Never in your Deployment manifest?
  - If using a cloud cluster, did you push your images to a public or authenticated private Docker registry? Is the imagePullPolicy appropriate (e.g., IfNotPresent or Always)?
Frontend cannot connect to Backend (Backend not reachable):
- Symptom: The frontend loads, but the message from the backend says "Backend not reachable" or a similar error.
- Cause: The frontend container cannot resolve the backend Service name or reach its port.
- Troubleshooting:
  - Check Backend Pods: Are the backend Pods running correctly? (kubectl get pods -l tier=backend)
  - Check Backend Service: Is the backend-service running, and does its selector match the labels of the backend Pods? (kubectl get service backend-service, kubectl describe service backend-service)
  - Check BACKEND_URL in Frontend Deployment: Does the env.BACKEND_URL in frontend-deployment.yaml correctly point to http://backend-service:3001 (or whatever your service name and port are)?
  - Check Port Mismatch: Does the targetPort in backend-service.yaml match the containerPort in backend-deployment.yaml? And does the port of backend-service match the port in the BACKEND_URL?
External access not working for Frontend:
- Symptom: You can't reach the frontend application from your browser.
- Cause: Issues with the NodePort Service or firewall rules.
- Troubleshooting:
  - Check Frontend Service: Is frontend-service created and of type: NodePort? (kubectl get service frontend-service)
  - Minikube URL: Are you using the correct URL provided by minikube service frontend-service --url?
  - Firewall: On cloud environments, ensure your security groups/firewalls allow inbound traffic on the NodePort (e.g., 30080).
General Troubleshooting Commands:
- kubectl get pods -o wide: See all Pods, their status, and which Node they're on.
- kubectl describe pod <pod-name>: Get detailed information about a Pod, including events, errors, and container status.
- kubectl logs <pod-name> -c <container-name>: View the logs of a specific container within a Pod. Crucial for application-level errors.
- kubectl get events: See cluster-level events, which can reveal issues like failed scheduling or image pulls.
Summary
Phew! You’ve just completed another significant project. Here’s what you’ve achieved and learned in this chapter:
- Multi-Service Deployment: You successfully deployed an application composed of separate frontend and backend services.
- Kubernetes Manifest Organization: You learned to structure your Kubernetes configurations into separate, manageable YAML files.
- Inter-Service Communication: You utilized Kubernetes’ built-in Service DNS to enable your frontend to seamlessly communicate with your backend using simple service names.
- External Exposure: You exposed your frontend application to the outside world using a NodePort Service, making it accessible from your browser.
- Scaling: You briefly explored how easy it is to scale components of your application using kubectl scale, leveraging Kubernetes' orchestration capabilities.
- Troubleshooting: You gained insight into common issues and essential kubectl commands for diagnosing problems in a multi-service setup.
This project is a critical step towards understanding real-world application deployments on Kubernetes. You’re now equipped to handle more complex microservices architectures.
What’s Next?
In the upcoming chapters, we’ll continue to build on this foundation. We’ll explore more advanced Kubernetes features like Ingress for smarter external routing, introduce persistent storage for stateful applications, and integrate these deployments into a full CI/CD pipeline using tools like GitHub Actions or Jenkins. Keep up the great work!