Stopping a Docker container does not lose its data. When a container stops and exits, it enters the Exited state; its data is preserved and the container can still be seen with "docker ps -a". The data is cleared only when the container itself is deleted.
The operating environment for this tutorial: Linux 5.9.8, Docker 1.13.1, Dell G3 computer.
Stop the docker container and the data will not be lost.
When a Docker container stops and exits, it enters the Exited state, which is comparable to shutting down a virtual machine, so no data is lost.
At this point you can still see the container with docker ps -a and restart it with docker start. Its data is cleared only after the container is deleted with docker rm.
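A minimal shell session illustrating this lifecycle (the container name demo and the ubuntu image are only examples; any image works):

```shell
# Run a long-lived container (ubuntu image assumed available locally or on Docker Hub)
docker run -d --name demo ubuntu sleep infinity

# Write some data inside the container
docker exec demo sh -c 'echo hello > /data.txt'

# Stop the container: it moves to the Exited state, the data is kept
docker stop demo
docker ps -a --filter name=demo    # STATUS column shows "Exited (...)"

# Start it again: the data written earlier is still there
docker start demo
docker exec demo cat /data.txt

# Only removing the container discards its writable layer, and with it the data
docker rm -f demo
```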
If you create a container and later delete it, the data inside it is deleted along with the container.
How to delete the container without deleting the data: create the container with a volume mount, docker run -v host_dir:container_dir
This solves the problem.
Extended knowledge: when a container is deleted and recreated, the logs or database data generated while it was running are cleared.
Solution:
Docker can store data permanently by mounting a host directory into the container.
1. Specify a Docker Volume when creating the container
Use the docker run command to run a container from the ubuntu/nginx image, mounting the local directory /tmp/source to the container directory /tmp/destination:
docker run -itd --volume /tmp/source:/tmp/destination --name test ubuntu/nginx bash
This creates a Docker container based on the ubuntu/nginx image.
The container is named test, specified by the --name option.
The Docker Volume is specified by the --volume option (abbreviated -v); the host's /tmp/source directory maps to the /tmp/destination directory in the container.
2. View Docker Volume
Use the docker inspect command to view the detailed information of the Docker container:
docker inspect --format='{{json .Mounts}}' test | python -m json.tool
[
    {
        "Destination": "/tmp/destination",
        "Mode": "",
        "Propagation": "",
        "RW": true,
        "Source": "/tmp/source",
        "Type": "bind"
    }
]
Use the --format option to selectively view the required container information; .Mounts holds the container's Docker Volume information.
python -m json.tool pretty-prints the JSON output for readability.
Source represents the directory on the host, that is, /tmp/source.
Destination is the directory in the container, that is, /tmp/destination.
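The json.tool formatting step can be tried on its own, without a running container. The sketch below feeds a sample .Mounts payload (matching the output above) through python -m json.tool:

```shell
# Sample .Mounts JSON, as produced by: docker inspect --format='{{json .Mounts}}' test
mounts='[{"Destination": "/tmp/destination", "Mode": "", "Propagation": "", "RW": true, "Source": "/tmp/source", "Type": "bind"}]'

# python -m json.tool pretty-prints the compact JSON string
echo "$mounts" | python3 -m json.tool
```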
3. Local files can be synchronized to the container
Create a new hello.txt file in the local /tmp/source directory:

touch /tmp/source/hello.txt
ls /tmp/source/
hello.txt

The hello.txt file is visible in the container's /tmp/destination/ directory.
Use the docker exec command to execute commands in the container.
docker exec test ls /tmp/destination/
hello.txt
So modifications to the directory /tmp/source/ on the host machine can be synchronized to the container directory /tmp/destination/.
4. Container files can be synchronized to the host machine
Create a new world.txt file in the container's /tmp/destination directory:

docker exec test touch /tmp/destination/world.txt
docker exec test ls /tmp/destination/
hello.txt  world.txt

The world.txt file is visible in the host's /tmp/source/ directory:

ls /tmp/source/
hello.txt  world.txt
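The two-way synchronization works because a bind mount points both paths at the same directory on the host filesystem, which is also why the data survives deletion of the container. A sketch of the payoff (assuming the test container from the steps above is still running; the file names are illustrative):

```shell
# Host -> container: a file created on the host appears inside the container
echo 'from host' > /tmp/source/from_host.txt
docker exec test ls /tmp/destination/from_host.txt

# Container -> host: a file created in the container appears on the host
docker exec test sh -c 'echo "from container" > /tmp/destination/from_container.txt'
ls /tmp/source/from_container.txt

# Deleting the container leaves the bind-mounted files intact on the host
docker rm -f test
ls /tmp/source/
```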
The above is the detailed content of "Will data be lost if you stop the Docker container?". For more information, please follow other related articles on the PHP Chinese website.
