The difference between ARG and ENV in Docker is: an ARG exists only while the image is being built and can be referenced as a variable inside the Dockerfile, while an ENV becomes an environment variable of the built container and persists at runtime, where ARG values are no longer available.
The operating environment of this tutorial: Linux 7.3, Docker 1.13.1, Dell G3 computer.
What is the difference between arg and env in docker
When you build an image with docker-compose, ARG and ENV can look very similar, but each has its own purpose. The key difference is when they take effect.
When they take effect
ARG exists during the build and can be used as a variable in the Dockerfile.

ENV is an environment variable of the container after it is built; it persists at runtime.
From this it is clear that ARG is designed specifically for building images.
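Because ARG is a build-time mechanism, a default declared in the Dockerfile can also be overridden on the `docker build` command line. A minimal sketch (the image tag `my-redis` and the password value are illustrative, not from the original article):

```shell
# Override the ARG default declared in the Dockerfile at build time.
# --build-arg only affects the build itself; it leaves no trace in the
# running container unless the Dockerfile copies the value into an ENV.
docker build --build-arg REDIS_SET_PASSWORD=s3cret -t my-redis .
```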
Take a specific example
```dockerfile
# Dockerfile
FROM redis:3.2-alpine

LABEL maintainer="GPF <5173180@qq.com>"

ARG REDIS_SET_PASSWORD=developer
ENV REDIS_PASSWORD ${REDIS_SET_PASSWORD}

VOLUME /data

EXPOSE 6379
CMD ["sh", "-c", "exec redis-server --requirepass \"$REDIS_PASSWORD\""]
```
This Dockerfile builds a Redis image. In the middle it contains these two lines:
```dockerfile
ARG REDIS_SET_PASSWORD=developer
ENV REDIS_PASSWORD ${REDIS_SET_PASSWORD}
```
They exist to serve this line:
```dockerfile
CMD ["sh", "-c", "exec redis-server --requirepass \"$REDIS_PASSWORD\""]
```
This is the line that sets the password when Redis starts. By the time CMD executes, the image has already been built and the container is running; CMD runs its command inside the container, so the variables it references are environment variables, not Dockerfile build variables. That is why the ARG value must be assigned to an ENV during the build.
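Putting the pieces together, the handoff from ARG to ENV can be verified end to end. A sketch, assuming a local Docker daemon; the image and container names (`my-redis`, `redis-test`) are illustrative:

```shell
# Build the image; the ARG default "developer" is copied into ENV REDIS_PASSWORD.
docker build -t my-redis .

# Start a container and check that the password took effect.
docker run -d --name redis-test my-redis
docker exec redis-test redis-cli -a developer ping   # expect: PONG
```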
Here is another example of using ARG:
```dockerfile
FROM nginx:1.13.1-alpine

LABEL maintainer="GPF <5173180@qq.com>"

# https://yeasy.gitbooks.io/docker_practice/content/image/build.html
RUN mkdir -p /etc/nginx/cert \
    && mkdir -p /etc/nginx/conf.d \
    && mkdir -p /etc/nginx/sites

COPY ./nginx.conf /etc/nginx/nginx.conf
COPY ./conf.d/ /etc/nginx/conf.d/
COPY ./cert/ /etc/nginx/cert/
COPY ./sites /etc/nginx/sites/

ARG PHP_UPSTREAM_CONTAINER=php-fpm
ARG PHP_UPSTREAM_PORT=9000

RUN echo "upstream php-upstream { server ${PHP_UPSTREAM_CONTAINER}:${PHP_UPSTREAM_PORT}; }" > /etc/nginx/conf.d/upstream.conf

VOLUME ["/var/log/nginx", "/var/www"]

WORKDIR /usr/share/nginx/html
```
Here, only ARG is used:
```dockerfile
ARG PHP_UPSTREAM_CONTAINER=php-fpm
ARG PHP_UPSTREAM_PORT=9000

RUN echo "upstream php-upstream { server ${PHP_UPSTREAM_CONTAINER}:${PHP_UPSTREAM_PORT}; }" > /etc/nginx/conf.d/upstream.conf
```
The variables here are ARG rather than ENV because this command runs during the Dockerfile build. ARG is the right choice for values that are only needed temporarily at build time and do not need to be stored as environment variables in the final image.
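The difference in scope can be demonstrated with a minimal Dockerfile sketch (the variable names here are illustrative, not from the examples above). During the build both values are visible, but only the ENV survives into the container's runtime environment:

```dockerfile
FROM alpine:3.7

ARG BUILD_ONLY=from-arg
ENV RUNTIME_VAR=from-env

# During the build, both the ARG and the ENV are visible:
RUN echo "build sees: ${BUILD_ONLY} / ${RUNTIME_VAR}"

# At runtime, `docker run <image> env` lists RUNTIME_VAR=from-env,
# but BUILD_ONLY is gone: ARG values are not persisted in the image.
CMD ["env"]
```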
The above is the detailed content of What is the difference between arg and env in docker. For more information, please follow other related articles on the PHP Chinese website!
