
How to use Docker for continuous integration and continuous deployment


With the rapid pace of modern software development, continuous integration (CI) and continuous deployment (CD) have become indispensable parts of the development process. As a containerization platform, Docker can greatly simplify both. This article introduces how to use Docker for continuous integration and continuous deployment, with concrete code examples.

1. Continuous Integration

Continuous integration means frequently merging developers' code changes into a shared repository and frequently building and testing the result. Using Docker for continuous integration simplifies environment configuration and the build process, improving development efficiency.

  1. Create a Dockerfile

A Dockerfile is a script used to build a Docker image. Create a file named Dockerfile in the project root directory and add the following:

# Use the official Node.js image as the base image
FROM node:alpine

# Set the working directory
WORKDIR /app

# Copy package.json and package-lock.json into the working directory
COPY package*.json ./

# Install project dependencies
RUN npm install

# Copy the project files into the working directory
COPY . .

# Expose the application port
EXPOSE 3000

# Run the application
CMD ["npm", "start"]

This Dockerfile defines a Node.js-based image: it installs the project's dependencies into the image, copies the application files and code into the working directory, and finally exposes the port and runs the application.
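Before building, it is worth adding a .dockerignore file next to the Dockerfile. It is not part of the original steps, but it is a common companion that keeps local artifacts out of the build context:

# .dockerignore — keep the build context small
node_modules
npm-debug.log
.git

This keeps builds faster and ensures that npm install inside the image is the only source of dependencies.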

  2. Build the Docker image

In the project root directory, use the following command to build the Docker image:

docker build -t my-app .

This command builds an image named my-app based on the instructions in the Dockerfile.
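To sanity-check the image before wiring it into a pipeline, you can run it directly. The container port 3000 comes from the EXPOSE line above; the host port here is an arbitrary choice, and the curl check assumes the app serves an HTTP endpoint:

docker run --rm -p 3000:3000 my-app

# in another terminal, verify the app responds
curl http://localhost:3000/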

  3. Containerized testing

Create a file named docker-compose.test.yml in the project root directory and add the following code example:

version: '3'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    depends_on:
      - db
    command: npm run test
  db:
    image: mongo

This docker-compose.test.yml file defines two services: app, our application service, and db, our database service. It instructs Docker to start both services and run the test command in the app service.
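Note that Compose places both services on a shared network where each container is reachable by its service name. Assuming the app reads its connection string from an environment variable (the variable name MONGODB_URI here is illustrative, not from the original), you could point the tests at the db service like this:

services:
  app:
    # ...build, depends_on and command as above...
    environment:
      # "db" resolves to the mongo container on the Compose network;
      # 27017 is MongoDB's default port
      - MONGODB_URI=mongodb://db:27017/test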

In the project root directory, use the following command to run the test container:

docker-compose -f docker-compose.test.yml up

This command will start the app and db services and run the test command.
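As written, up keeps streaming output and does not report whether the tests passed, which is unhelpful in a pipeline. For CI use, the following variant (using standard docker-compose flags) stops all services when the tests finish, propagates the app container's exit code, and then cleans up:

# fail the build if the tests fail
docker-compose -f docker-compose.test.yml up --abort-on-container-exit --exit-code-from app

# remove the containers and network afterwards
docker-compose -f docker-compose.test.yml down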

  4. Automated continuous integration

The purpose of continuous integration is to merge developers' code changes into the mainline quickly and frequently, and to run automated builds and tests on each merge. Tools such as Jenkins and GitLab CI can automate this.

Taking Jenkins as an example, create a file named Jenkinsfile and add the following:

pipeline {
    agent any

    stages {
        stage('Build') {
            steps {
                sh 'docker build -t my-app .'
            }
        }
        stage('Test') {
            steps {
                // propagate the test result so that a failing test suite fails the build
                sh 'docker-compose -f docker-compose.test.yml up --exit-code-from app'
            }
        }
    }

    post {
        always {
            // always clean up the test containers
            sh 'docker-compose -f docker-compose.test.yml down'
        }
    }
}

This Jenkinsfile defines a Jenkins pipeline with two stages: build and test. The build stage runs docker build to create the Docker image; the test stage runs docker-compose with --exit-code-from app so that a failing test suite fails the build, and the post section tears the test containers down regardless of the outcome.

Add the Jenkinsfile to the project root and configure a Jenkins server to run it for automated continuous integration.
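If you use GitLab CI instead, a roughly equivalent pipeline can live in a .gitlab-ci.yml file in the project root. This is a minimal sketch assuming a runner with Docker and docker-compose available (for example via a shell executor or Docker-in-Docker); adapt it to your runner setup:

stages:
  - build
  - test

build:
  stage: build
  script:
    - docker build -t my-app .

test:
  stage: test
  script:
    - docker-compose -f docker-compose.test.yml up --exit-code-from app
  after_script:
    - docker-compose -f docker-compose.test.yml down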

2. Continuous Deployment

Continuous deployment refers to automatically deploying code to the production environment after completing continuous integration. Using Docker for continuous deployment can greatly simplify the deployment process.

  1. Create a Docker image

Using the Dockerfile created in the previous steps, build a Docker image that contains the application code.
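For the production host to pull the image, it usually needs to be pushed to a registry first. The account and tag names below are placeholders (Docker Hub shown as an example), not part of the original steps:

# tag the local image for a registry account (placeholder name)
docker tag my-app:latest your-dockerhub-user/my-app:latest

# log in and push the image
docker login
docker push your-dockerhub-user/my-app:latest

If you push under a registry name like this, reference that same name in the image: field of the docker-compose.yml below.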

  2. Write the docker-compose.yml file

Create a file named docker-compose.yml in the project root directory and add the following code example:

version: '3'
services:
  app:
    image: my-app:latest
    restart: always
    ports:
      - "80:3000"

This docker-compose.yml file instructs Docker to run an app service from the my-app image built earlier. Port mapping, environment variables, and additional services can also be configured here.
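For instance, the same file could also carry environment configuration and a database service alongside the app. The variable and volume names here are illustrative:

version: '3'
services:
  app:
    image: my-app:latest
    restart: always
    ports:
      - "80:3000"
    environment:
      - NODE_ENV=production
      # "db" resolves to the mongo service below
      - MONGODB_URI=mongodb://db:27017/app
    depends_on:
      - db
  db:
    image: mongo
    restart: always
    volumes:
      # persist database files across container restarts
      - mongo-data:/data/db

volumes:
  mongo-data: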

  3. Deploy the application

Use the following command to deploy the application in the production environment:

docker-compose up -d

This command starts the app service in the background and exposes it on port 80 of the host.
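Day-to-day operation then boils down to a few standard commands (shown for the classic docker-compose CLI):

# pull a newer image and recreate the container with it
docker-compose pull
docker-compose up -d

# inspect the running service and follow its logs
docker-compose ps
docker-compose logs -f app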

These are the concrete steps and code examples for using Docker for continuous integration and continuous deployment. With Docker, you can simplify environment configuration and the deployment process, and improve development efficiency and application reliability.

