Welcome to your first major DevOps project! Up until now, we’ve explored individual tools and concepts: from the Linux command line to Git for version control, Docker for containerization, and the fundamentals of CI/CD. Now, it’s time to bring them all together and build something truly powerful: an End-to-End CI/CD Pipeline for a Web Application.
This chapter is your opportunity to apply everything you’ve learned in a practical, hands-on scenario. You’ll set up a complete workflow that automatically takes your code from a Git repository, builds it, tests it (conceptually for this project), containerizes it, and then prepares it for deployment. This automation is the heart of modern software delivery, enabling faster, more reliable releases.
What You’ll Learn:
- How to structure a simple web application for containerization.
- Creating a `Dockerfile` to build a production-ready Docker image.
- Setting up a GitHub repository and pushing your code.
- Defining a multi-stage CI/CD workflow using GitHub Actions.
- Automating the build, test, and container image creation process.
- Understanding how to push Docker images to a registry like Docker Hub.
- Implementing a basic deployment strategy using SSH to a remote server.
Prerequisites:
Before we dive in, ensure you’re comfortable with:
- Linux Command Line: Basic navigation, file operations.
- Git & GitHub: Initializing repositories, committing, pushing, creating branches.
- Docker: Building images, running containers locally.
- GitHub Actions: The basics of workflows, jobs, and steps.
- A GitHub account and a Docker Hub account (both free for basic use).
- Access to a remote Linux server/VM (e.g., AWS EC2, DigitalOcean Droplet, a local VM) where you can SSH into and install Docker. This will be our deployment target.
Ready? Let’s build some automation!
Core Concepts: The CI/CD Pipeline Blueprint
A CI/CD pipeline is a series of automated steps that ensure your software is always in a releasable state. For our project, we’ll implement a pipeline that covers the following stages:
- Source: Your application code lives in a Git repository (GitHub).
- Build: The application code is compiled or prepared. For a Node.js app, this means installing dependencies.
- Test: Automated tests are run to ensure code quality and functionality (we’ll include a placeholder for this).
- Package: The application is packaged into a deployable artifact. We’ll use Docker to create a container image.
- Publish: The Docker image is pushed to a container registry (Docker Hub).
- Deploy: The new Docker image is pulled and run on a remote server.
Let’s visualize this flow:
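From source to running container, the stages chain together like this:

```text
Source (GitHub) --> Build --> Test --> Package (Docker image) --> Publish (Docker Hub) --> Deploy (remote server)
```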
Our Sample Web Application: A Simple Node.js App
To keep things focused on the CI/CD process, we’ll use a very basic Node.js Express application. It will simply display “Hello, DevOps!” when accessed.
Why Node.js? It’s lightweight, easy to understand, and widely used, making it a great candidate for quick containerization.
Tools Overview for This Project
- Git & GitHub: For source code management and triggering our pipeline.
- Node.js (v20.x LTS): The runtime for our web application.
- Docker (v25.x stable): To containerize our application.
- Docker Hub: Our container image registry.
- GitHub Actions: The CI/CD orchestrator.
- Remote Linux Server: Our deployment target.
Step-by-Step Implementation
Let’s start building!
Step 1: Create the Sample Web Application
First, create a new directory for your project and navigate into it.
```shell
mkdir my-devops-webapp
cd my-devops-webapp
```
Now, let’s create our Node.js application files.
1. package.json
This file defines our project and its dependencies.
`my-devops-webapp/package.json` (the path is given here rather than as a comment, since JSON does not allow comments):

```json
{
  "name": "my-devops-webapp",
  "version": "1.0.0",
  "description": "A simple Node.js web application for DevOps project.",
  "main": "index.js",
  "scripts": {
    "start": "node index.js",
    "test": "echo \"No tests specified for this project yet\" && exit 0"
  },
  "dependencies": {
    "express": "^4.18.2"
  },
  "author": "Your Name",
  "license": "MIT"
}
```
Explanation:
- `name`, `version`, `description`: Basic project metadata.
- `main`: Specifies the entry point of our application (`index.js`).
- `scripts`: Defines commands we can run. `npm start` will execute `node index.js`. We’ve added a placeholder `test` script.
- `dependencies`: Lists external libraries our app needs. Here, `express` is a popular web framework for Node.js. `^4.18.2` means “compatible with version 4.18.2 or newer minor/patch releases.”
2. index.js
This is our simple web server application code.
```javascript
// my-devops-webapp/index.js
const express = require('express');
const app = express();
const port = process.env.PORT || 3000; // Use environment variable for port, or default to 3000

app.get('/', (req, res) => {
  res.send('<h1>Hello, DevOps! This is our first CI/CD project!</h1>');
});

app.listen(port, () => {
  console.log(`Web app listening at http://localhost:${port}`);
});
```
Explanation:
- `const express = require('express');`: Imports the Express library.
- `const app = express();`: Creates an Express application instance.
- `const port = process.env.PORT || 3000;`: Defines the port. Crucially, it uses `process.env.PORT`, allowing us to configure the port via an environment variable, which is a common practice in containerized environments. If not set, it defaults to `3000`.
- `app.get('/', ...)`: Sets up a route for the root URL (`/`). When a GET request comes to `/`, it sends back an HTML response.
- `app.listen(port, ...)`: Starts the web server, listening on the specified port.
Test Your Application Locally (Optional but Recommended)
Before containerizing, let’s ensure the app runs:
```shell
npm install   # Installs the 'express' dependency
npm start     # Starts the application
```
You should see `Web app listening at http://localhost:3000`. Open your browser to `http://localhost:3000` and confirm you see the “Hello, DevOps!” message. Press `Ctrl+C` to stop the server.
Step 2: Containerize the Application with Docker
Now, let’s create a Dockerfile to package our application into a Docker image.
```dockerfile
# my-devops-webapp/Dockerfile

# Stage 1: Build the application
# Use a Node.js LTS image as our base (version 20.x as of 2026 for stability)
FROM node:20-alpine AS builder

# Set the working directory inside the container
WORKDIR /app

# Copy package.json and package-lock.json first to leverage Docker layer caching
# This step only runs if these files change, speeding up subsequent builds
COPY package*.json ./

# Install application dependencies
# The --omit=dev flag ensures we only install production dependencies
RUN npm install --omit=dev

# Copy the rest of the application code
COPY . .

# Stage 2: Create a smaller, production-ready image
# Use a minimal base image for the final production container
FROM node:20-alpine

# Set the working directory
WORKDIR /app

# Copy only the necessary files from the builder stage
# This significantly reduces the final image size
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/index.js .
COPY --from=builder /app/package.json .

# Expose the port our application listens on
EXPOSE 3000

# Define the command to run our application when the container starts
CMD ["npm", "start"]
```
Explanation of Dockerfile (Line by Line):
This Dockerfile uses a multi-stage build, which is a modern best practice for creating small, efficient Docker images.
- `FROM node:20-alpine AS builder`: We start with the `node:20-alpine` image. `alpine` is a lightweight Linux distribution, making our base image smaller. We name this stage `builder`. Node.js v20.x is a supported LTS (Long Term Support) version as of 2026.
- `WORKDIR /app`: Sets `/app` as the current working directory inside the container for subsequent commands.
- `COPY package*.json ./`: Copies `package.json` and `package-lock.json` (if it exists) into the `/app` directory. We do this separately because these files change less often than application code, allowing Docker to cache this layer.
- `RUN npm install --omit=dev`: Installs the Node.js dependencies. `--omit=dev` ensures that development dependencies (which are not needed in production) are not installed, further reducing image size.
- `COPY . .`: Copies all other files from our current host directory into the container’s `/app` directory.
- `FROM node:20-alpine`: This starts a new build stage, which becomes our final production image. Notice we use `node:20-alpine` again, but this time it’s a fresh, clean slate.
- `WORKDIR /app`: Sets the working directory for this final stage.
- `COPY --from=builder /app/node_modules ./node_modules`: This is the magic of multi-stage builds! We copy only the `node_modules` directory (containing our installed dependencies) from the `builder` stage to our new, clean image.
- `COPY --from=builder /app/index.js .`: Copies our application’s entry point.
- `COPY --from=builder /app/package.json .`: Copies `package.json` for `npm start`.
- `EXPOSE 3000`: Informs Docker that the container listens on port `3000` at runtime. This is documentation, not a firewall rule.
- `CMD ["npm", "start"]`: Defines the default command to execute when a container starts from this image. This will run our Node.js application.
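One addition worth making alongside the `Dockerfile`: a `.dockerignore` file. Without it, `COPY . .` would also copy your host’s `node_modules` and `.git` directories into the builder stage, bloating the build context and overwriting the freshly installed dependencies. A minimal suggested version:

```
# my-devops-webapp/.dockerignore
node_modules
npm-debug.log
.git
.github
Dockerfile
```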
Mini-Challenge: Build and Run Your Docker Image Locally
Build the image:
```shell
docker build -t my-devops-webapp:1.0.0 .
```

- `docker build`: The command to build a Docker image.
- `-t my-devops-webapp:1.0.0`: Tags the image with a name (`my-devops-webapp`) and a version (`1.0.0`). This is crucial for identification.
- `.`: Specifies the build context (the current directory, where the `Dockerfile` and application files are).
What to observe: Watch Docker download base images (if not cached), install dependencies, and build the layers. It should end with a success message.
Run the container:
```shell
docker run -p 8080:3000 --name devops-app my-devops-webapp:1.0.0
```

- `docker run`: The command to run a Docker container.
- `-p 8080:3000`: Maps port `8080` on your host machine to port `3000` inside the container. This allows you to access the app from your host.
- `--name devops-app`: Assigns a human-readable name to your container.
- `my-devops-webapp:1.0.0`: Specifies the image to use.
Open your browser to `http://localhost:8080`. You should see your “Hello, DevOps!” message. Press `Ctrl+C` in the terminal to stop the container.

Hint: If you get an error, check your `Dockerfile` for typos and ensure Docker Desktop (or the daemon) is running. Use `docker ps -a` to see all containers (even stopped ones) and `docker logs <container_name_or_id>` to inspect container output.
Step 3: Set Up GitHub Repository
Now that our application and Dockerfile are ready, let’s put them under version control and push them to GitHub.
Initialize Git and commit your files:
git init git add . git commit -m "feat: Initial web app and Dockerfile"Create a new repository on GitHub:
- Go to
github.com. - Click the
+sign in the top right, then “New repository.” - Name it
my-devops-webapp(or similar). - Choose “Public” or “Private.”
- Do NOT initialize with a README,
.gitignore, or license – we’ll add our own. - Click “Create repository.”
- Go to
Add remote and push: GitHub will provide commands to push an existing repository. It will look something like this (replace
YOUR_USERNAME):git remote add origin https://github.com/YOUR_USERNAME/my-devops-webapp.git git branch -M main git push -u origin mainRefresh your GitHub repository page, and you should see your
index.js,package.json, andDockerfile.
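If you ran `npm install` locally, it’s also worth adding a `.gitignore` so the `node_modules` directory stays out of version control (commit and push it like any other change). A minimal suggested version:

```
# my-devops-webapp/.gitignore
node_modules/
npm-debug.log
```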
Step 4: Define the CI/CD Workflow with GitHub Actions
This is where the automation magic happens! We’ll create a .github/workflows/main.yml file to define our CI/CD pipeline.
1. Create the Workflow File
Create the directory and file:
```shell
mkdir -p .github/workflows
touch .github/workflows/main.yml
```
2. Add GitHub Actions Workflow Configuration
Now, open .github/workflows/main.yml and add the following content. We’ll break it down piece by piece.
```yaml
# .github/workflows/main.yml
name: CI/CD Pipeline for Web App

# Trigger the workflow on pushes to the 'main' branch
on:
  push:
    branches:
      - main

# Define the jobs that make up our pipeline
jobs:
  build-and-publish:
    # Run on the latest Ubuntu runner provided by GitHub
    runs-on: ubuntu-latest

    # These are environment variables specific to this job
    env:
      DOCKER_IMAGE_NAME: my-devops-webapp
      DOCKER_HUB_USERNAME: ${{ secrets.DOCKER_HUB_USERNAME }} # Use a GitHub Secret
      DOCKER_HUB_TOKEN: ${{ secrets.DOCKER_HUB_TOKEN }}       # Use a GitHub Secret

    steps:
      - name: Checkout Repository
        uses: actions/checkout@v4 # Action to check out your repository code

      - name: Set up Node.js
        uses: actions/setup-node@v4 # Action to set up the Node.js environment
        with:
          node-version: '20.x' # Use Node.js 20 LTS

      - name: Install Dependencies
        run: npm install --omit=dev

      - name: Run Tests (Placeholder)
        run: npm test

      - name: Log in to Docker Hub
        uses: docker/login-action@v3 # Action to log in to Docker Hub
        with:
          username: ${{ env.DOCKER_HUB_USERNAME }}
          password: ${{ env.DOCKER_HUB_TOKEN }}

      - name: Build and Push Docker Image
        uses: docker/build-push-action@v5 # Action to build and push the Docker image
        with:
          context: . # Build context is the current directory
          push: true # Push the image to Docker Hub
          tags: |
            ${{ env.DOCKER_HUB_USERNAME }}/${{ env.DOCKER_IMAGE_NAME }}:latest
            ${{ env.DOCKER_HUB_USERNAME }}/${{ env.DOCKER_IMAGE_NAME }}:${{ github.sha }}
          cache-from: type=gha # Use the GitHub Actions cache for faster builds
          cache-to: type=gha,mode=max

  deploy:
    runs-on: ubuntu-latest
    needs: build-and-publish # This job depends on 'build-and-publish' completing successfully

    env:
      DOCKER_IMAGE_NAME: my-devops-webapp
      DOCKER_HUB_USERNAME: ${{ secrets.DOCKER_HUB_USERNAME }} # Use a GitHub Secret

    steps:
      - name: Deploy to Remote Server via SSH
        uses: appleboy/ssh-action@v1.0.3 # Action for running SSH commands
        with:
          host: ${{ secrets.SSH_HOST }}
          username: ${{ secrets.SSH_USERNAME }}
          key: ${{ secrets.SSH_PRIVATE_KEY }}
          script: |
            # Navigate to a deployment directory (create it if it does not exist)
            mkdir -p /home/${{ secrets.SSH_USERNAME }}/devops-app
            cd /home/${{ secrets.SSH_USERNAME }}/devops-app
            # Stop and remove any existing container
            docker stop ${{ env.DOCKER_IMAGE_NAME }} || true
            docker rm ${{ env.DOCKER_IMAGE_NAME }} || true
            # Pull the latest Docker image from Docker Hub
            docker pull ${{ env.DOCKER_HUB_USERNAME }}/${{ env.DOCKER_IMAGE_NAME }}:latest
            # Run the new container
            docker run -d \
              --name ${{ env.DOCKER_IMAGE_NAME }} \
              -p 80:3000 \
              ${{ env.DOCKER_HUB_USERNAME }}/${{ env.DOCKER_IMAGE_NAME }}:latest
            echo "Deployment complete! Check http://${{ secrets.SSH_HOST }}"
```
Explanation of main.yml:
- `name: CI/CD Pipeline for Web App`: A user-friendly name for your workflow.
- `on: push: branches: - main`: This tells GitHub Actions to trigger this workflow whenever code is pushed to the `main` branch.
- `jobs:`: A workflow consists of one or more jobs.
- `build-and-publish:`: This job handles building the application, creating the Docker image, and pushing it to Docker Hub.
  - `runs-on: ubuntu-latest`: Specifies that this job will run on a fresh Ubuntu virtual machine hosted by GitHub.
  - `env:`: Environment variables for this specific job. `DOCKER_IMAGE_NAME` is the name we’ll use for our Docker image. `DOCKER_HUB_USERNAME` and `DOCKER_HUB_TOKEN` are CRITICAL: they reference GitHub Secrets, which you must configure in your GitHub repository settings.
  - `steps:`: A sequence of tasks to be executed in this job.
    - Checkout Repository: Uses `actions/checkout@v4` to clone your repository’s code onto the runner.
    - Set up Node.js: Uses `actions/setup-node@v4` to install Node.js (version 20.x) on the runner.
    - Install Dependencies: Runs `npm install` to get the app’s dependencies.
    - Run Tests (Placeholder): Runs `npm test`. In a real project, you’d have actual tests here.
    - Log in to Docker Hub: Uses `docker/login-action@v3` to authenticate with Docker Hub using your username and token from GitHub Secrets. This is necessary before you can push images.
    - Build and Push Docker Image: Uses `docker/build-push-action@v5` to build your Docker image from the `Dockerfile` in the current directory (`.`) and push it to Docker Hub. `push: true` enables pushing. `tags` defines the tags for your image; we’re using `latest` plus a unique tag based on the Git commit SHA (`github.sha`). `cache-from` and `cache-to` leverage GitHub Actions’ built-in caching for Docker layers, significantly speeding up subsequent builds.
- `deploy:`: This job handles deploying the newly published Docker image to our remote server.
  - `runs-on: ubuntu-latest`: Runs on an Ubuntu runner.
  - `needs: build-and-publish`: This is important! It ensures the `deploy` job only starts after the `build-and-publish` job has successfully completed.
  - Deploy to Remote Server via SSH: Uses the `appleboy/ssh-action` action to execute commands on a remote server via SSH. `host`, `username`, and `key` are also CRITICAL GitHub Secrets that you’ll need to configure. `script` is the multi-line shell script executed on your remote server:
    - `mkdir -p ...; cd ...`: Creates and navigates to a deployment directory.
    - `docker stop ... || true; docker rm ... || true`: Stops and removes any existing container of the same name. `|| true` prevents the workflow from failing if the container doesn’t exist.
    - `docker pull ...`: Pulls the latest version of your Docker image from Docker Hub.
    - `docker run ...`: Starts a new container from the pulled image. `-d` runs the container in detached mode (background). `-p 80:3000` maps port 80 on the host (the standard HTTP port) to port 3000 inside the container, making your web app accessible over plain HTTP. `--name` assigns a name to the running container.
    - `echo "Deployment complete!"`: Provides feedback in the workflow logs.
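A small optional tweak while you’re iterating: adding a `workflow_dispatch` trigger alongside `push` lets you re-run the pipeline manually from the Actions tab, which is handy when debugging secrets without making dummy commits:

```yaml
# Optional: allow manual runs in addition to pushes to main
on:
  push:
    branches:
      - main
  workflow_dispatch: {} # Adds a "Run workflow" button in the Actions tab
```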
3. Configure GitHub Secrets
This is a crucial security step. You should NEVER hardcode sensitive information like Docker Hub credentials or SSH keys directly into your main.yml file. GitHub Secrets provide a secure way to store these.
On GitHub:
1. Go to your repository (`https://github.com/YOUR_USERNAME/my-devops-webapp`).
2. Click on the “Settings” tab.
3. In the left sidebar, click “Secrets and variables” > “Actions”.
4. Click “New repository secret” and add the following secrets:
   - `DOCKER_HUB_USERNAME`: Your Docker Hub username.
   - `DOCKER_HUB_TOKEN`: A Docker Hub Access Token. To get one, go to `hub.docker.com`, navigate to “Account Settings” > “Security” > “New Access Token”. Give it a descriptive name (e.g., `github-actions-token`) and grant it “Read & Write” permissions. Copy the token immediately, as it’s only shown once.
   - `SSH_HOST`: The IP address or hostname of your remote Linux server (e.g., `192.168.1.100` or `my-server.example.com`).
   - `SSH_USERNAME`: The username you use to SSH into your remote server (e.g., `ubuntu`, `ec2-user`, `root`).
   - `SSH_PRIVATE_KEY`: Your SSH private key content. On your local machine, if you use an SSH key for your server (e.g., `~/.ssh/id_rsa` or `~/.ssh/my_server_key.pem`), open it with a text editor and copy its entire content, including the `-----BEGIN OPENSSH PRIVATE KEY-----` and `-----END OPENSSH PRIVATE KEY-----` lines. Ensure this key has no passphrase, or the SSH action might fail. For production, consider using more advanced key management solutions.
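Rather than reusing your personal key, a safer pattern is a dedicated, passphrase-less deploy key used only by this pipeline. A sketch (the file name `deploy_key` and the comment string are arbitrary choices of ours):

```shell
# Generate a dedicated ed25519 key pair with no passphrase (-N "")
ssh-keygen -t ed25519 -f ./deploy_key -N "" -C "github-actions-deploy"

# deploy_key.pub goes into ~/.ssh/authorized_keys on the remote server;
# the private half (./deploy_key) is what you paste into the SSH_PRIVATE_KEY secret
cat ./deploy_key.pub
```

If the key ever leaks, you can revoke it by removing that one line from `authorized_keys` without touching your personal keys.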
4. Prepare Your Remote Server
Your remote Linux server needs Docker installed and configured.
SSH into your remote server (replace with your actual key path, username, and host):

```shell
ssh -i ~/.ssh/my_server_key.pem YOUR_USERNAME@YOUR_SSH_HOST
```

Install Docker Engine. For Ubuntu/Debian-based systems (most common cloud VMs):

```shell
# Update package lists
sudo apt update

# Install prerequisites
sudo apt install ca-certificates curl gnupg lsb-release -y

# Add Docker's official GPG key
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg

# Set up the repository
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# Install Docker Engine (latest stable as of 2026 is likely 25.x or 26.x)
sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y

# Add your user to the 'docker' group to run Docker commands without sudo
sudo usermod -aG docker $USER

# You MUST log out and log back in (or reboot) for the group changes to take effect
echo "Docker installed. Please log out and log back in to apply group changes."
```

After logging out and back in, you should be able to run `docker ps` without `sudo`.

Ensure port 80 is open: if your server has a firewall (like `ufw` on Ubuntu) or sits behind cloud provider security groups, make sure port `80` (HTTP) is open to inbound traffic. For `ufw`:

```shell
sudo ufw allow http
sudo ufw enable  # if not already enabled
```

For cloud providers, configure your security group/firewall rules to allow inbound TCP traffic on port 80.
Step 5: Trigger Your First Pipeline
Now, commit your main.yml file and push it to GitHub:
```shell
git add .github/workflows/main.yml
git commit -m "feat: Add GitHub Actions CI/CD workflow"
git push origin main
```
As soon as you push, GitHub Actions will detect the change in the main branch and automatically start your workflow!
Observe the pipeline:
- Go to your GitHub repository.
- Click on the “Actions” tab.
- You’ll see your `CI/CD Pipeline for Web App` workflow running. Click on it.
- You can see the `build-and-publish` and `deploy` jobs. Click into each job to see the detailed logs of each step as it executes.
If everything is configured correctly, both jobs should complete successfully.
Verify Deployment:
Once the deploy job finishes, open your web browser and navigate to http://YOUR_SSH_HOST (replace with your server’s IP address or hostname). You should see your “Hello, DevOps!” web application running!
Congratulations! You’ve just built and deployed your first end-to-end CI/CD pipeline!
Mini-Challenge: Update and Redeploy
Let’s see the CI/CD in action with a change.
Challenge:
- Modify the `index.js` file to change the greeting message. For example, change `<h1>Hello, DevOps! This is our first CI/CD project!</h1>` to `<h1>Hello again, DevOps! Our pipeline works!</h1>`.
- Commit this change and push it to your `main` branch on GitHub.
- Observe the GitHub Actions pipeline running automatically.
- Once the pipeline completes, verify that the updated message is displayed when you access your web application’s URL.
What to observe/learn: This exercise demonstrates the power of CI/CD. A simple code change automatically triggers the entire process, from building a new image to deploying the updated application, without any manual intervention beyond the initial commit. This dramatically speeds up development cycles and reduces human error.
Common Pitfalls & Troubleshooting
Building pipelines can be tricky. Here are some common issues and how to troubleshoot them:
GitHub Actions YAML Syntax Errors:
- Symptom: Workflow fails immediately with a parsing error, or doesn’t even appear in the Actions tab.
- Fix: YAML is very sensitive to indentation. Use a YAML linter (many online tools or IDE extensions) to check your `main.yml` file. Ensure spaces, not tabs, are used for indentation.
Missing or Incorrect GitHub Secrets:
- Symptom: `docker login` fails with “denied: incorrect username or password”, or the SSH step fails with “Authentication failed.”
- Fix: Double-check that all required secrets (`DOCKER_HUB_USERNAME`, `DOCKER_HUB_TOKEN`, `SSH_HOST`, `SSH_USERNAME`, `SSH_PRIVATE_KEY`) are correctly set in your GitHub repository’s “Settings > Secrets and variables > Actions.” Ensure the Docker Hub token has “Read & Write” permissions and the SSH private key is correct and has no passphrase.
Docker Build Failures:
- Symptom: The `Build and Push Docker Image` step fails.
- Fix: Examine the logs for that step carefully. Look for error messages related to `npm install` (e.g., dependency not found) or `COPY` commands (e.g., file not found). It often means an issue in your `Dockerfile` or a missing file.
Deployment Issues (SSH Errors):
- Symptom: The `Deploy to Remote Server` step fails, often with “Host key verification failed” or “Permission denied.”
- Fix:
  - `SSH_HOST`/`SSH_USERNAME`: Verify these are correct. Can you manually SSH to the server with these credentials?
  - `SSH_PRIVATE_KEY`: Ensure the key is correctly copied into the secret (entire content, including headers/footers) and that it has no passphrase.
  - Host key: The SSH action may complain about host key verification. For learning purposes, check the `appleboy/ssh-action` documentation for options that relax host key checking, but be aware this reduces security in production; a better approach is to manage `known_hosts` properly.
  - Docker on server: Ensure Docker is running on your remote server and your SSH user has permission to run `docker` commands without `sudo`.
Application Not Accessible After Deployment:
- Symptom: Deployment succeeds, but you can’t reach the web app in your browser.
- Fix:
- Firewall: Check that port `80` (or whatever host port you mapped) is open in your server’s firewall (e.g., `ufw`, `firewalld`) and your cloud provider’s security groups.
- Container status: SSH into your server and run `docker ps`. Is your `my-devops-webapp` container running? If not, `docker logs my-devops-webapp` will show why it failed to start.
- Port mapping: Double-check the `-p 80:3000` in your `docker run` command. Does the container’s internal port (`3000` in our case) match the port the app listens on?
Summary
You’ve done it! In this chapter, you’ve moved from individual components to a fully integrated DevOps workflow.
Here are the key takeaways:
- Integrated Workflow: You’ve built an end-to-end CI/CD pipeline, connecting Git, Docker, and GitHub Actions.
- Automation: You’ve automated the process of building, packaging, and deploying a web application.
- Containerization: You used a multi-stage `Dockerfile` to create an efficient Docker image for your application.
- GitHub Actions: You leveraged GitHub Actions to define jobs and steps, and used prebuilt actions for common tasks like checking out code, logging into Docker Hub, and deploying via SSH.
- Security Best Practices: You used GitHub Secrets to securely manage sensitive credentials, a critical aspect of any production pipeline.
- Practical Deployment: You implemented a basic deployment strategy to a remote Linux server.
This project is a foundational step in your DevOps journey. It showcases how continuous integration and continuous delivery streamline development and operations.
What’s Next?
In upcoming chapters, we’ll expand on this foundation:
- More Robust Testing: Integrating actual unit, integration, and end-to-end tests into the pipeline.
- Advanced Deployment: Exploring blue/green deployments, canary releases, and rolling updates.
- Orchestration with Kubernetes: Deploying our containerized applications to a Kubernetes cluster for scalability and resilience.
- Monitoring & Logging: Adding tools to observe the health and performance of our deployed applications.
Keep experimenting, keep learning, and remember: automation is your friend!
References
- GitHub Actions Documentation
- Docker Documentation
- Node.js Official Website
- Docker Hub
- appleboy/ssh-action GitHub Repository