
Getting started with Docker



This article is an original work by fireaxe, released under the GPL. You may freely copy and reprint it; when reprinting, please keep the document intact and credit the original author and original link. The content may be used for any purpose, but no guarantee is made regarding the consequences of its use.

Author: haoqiang1531@outlook.com Blog: fireaxe.blog.chinaunix.net
1. What is docker? In principle, docker is a technology derived from lxc and AUFS. Q: What is lxc? A: lxc is a Linux kernel container, which is equivalent to a Linux lightweight virtual machine. Compared with virtual machines with instruction set virtualization such as virtual box and vmware, its advantage is that it utilizes the kernel of the host system. Therefore, lxc can be regarded as a virtual machine that shares the kernel. A blessing and a curse, lxc's shortcoming is also due to the use of the host's kernel, so the container must also run a Linux system. If you need to use a non-linux system, you can only use virtual machines such as vmware. Q: In what scenarios do I need to use docker? It is best to discuss this issue after reading the following content, but based on the importance of this issue, I decided to move it to the front. (Generally speaking, most of them only read the first three paragraphs...) 1) Cloud deployment (I have never played this part, I can only hear it from hearsay). In the past, virtual machines were used. After having docker, some questions about what to use Applications that are not required by the operating system are immediately moved over. Of course, lxc will also work, but as long as cloud platforms are used, they are applied on a large scale. However, lxc does not have the ease of deployment and migration required for large-scale applications. 2) The first CI platform within the company is the construction of the CI platform. Docker technology can be used to separate the tools of the CI platform and improve the flexibility of upgrades. At the same time, the tool is backed up by using image. (There are other methods for data backup) The second is the packaging of the test environment. Using dockerfile, the latest version is automatically used to synthesize a testable environment, which can eliminate environmental interference. At the same time, once the test is completed, the image can be released to the outside world to avoid various configuration problems caused by customers reinstalling the software. (In the past, you had to adapt to various environments, but now it’s better. You can release it together with the environment, and you don’t need to test multiple environments.) 3) Quickly set up the development environment. Use docker to implement the development environment. Once anyone has a new computer and wants to set up a development environment, Just pull an image and it will be done in minutes! !
Q: Why can lxc achieve isolation?
A: Linux boots by starting the kernel first, and the kernel then starts user space. Nothing prevents the kernel from starting multiple user spaces; all that is needed is isolation inside the kernel, which is also why lxc has to be implemented in the kernel. (A user space does not need to know that other user spaces exist besides itself.)
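
A quick way to observe this isolation from inside a container (a sketch; it assumes the stock ubuntu image, whose base install includes ps):

$ docker run --rm ubuntu ps aux    # lists only the container's own processes:
                                   # each container gets its own PID namespace,
                                   # so the host's processes are invisible here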
Q: Why can different distributions run in containers on the same system at the same time?
A: Linux distributions differ mainly in user space; the kernel is essentially the same, and that is precisely what makes running different distributions side by side convenient. lxc only provides the kernel to the container and then constructs a different user space for each distribution as needed.
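
A small sketch that makes the shared kernel visible (assuming an Ubuntu host and the public centos image):

$ uname -r                           # kernel version reported by the host
$ docker run --rm centos uname -r    # a CentOS user space reports the same host kernel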
Q: Since lxc already provides containers, why not use lxc directly?
A: In fact docker is not used everywhere; it depends on the application scenario. lxc is essentially virtual machine technology. If my daily work only needs different distributions, or different versions of the same distribution, then lxc is entirely sufficient. Docker is more about the separation of services. Today's systems grow ever more complex: services running on the same machine interfere with each other in all sorts of ways, and migration suffers too, because moving a service to another machine runs into endless environment-configuration and dependency problems. lxc can also solve this, but since each lxc container only gets kernel support, the user-space environment has to be configured from scratch every time. If three containers all need an apache server, I have to install it once in each container, which is obviously wasteful. Likewise, if several development environments need the gcc compiler, multiple copies must be installed. So people started looking at reusing part of user space.
Q: How is user-space-level reuse achieved?
A: As mentioned at the beginning, docker is a technology built on lxc and AUFS, and AUFS is what lets users reuse part of user space. A user space is essentially a file system, so reusing user space amounts to reusing file systems. AUFS can stack multiple directories and set the read/write attributes of each directory independently. Put simply, lxc gives each container a file system completely isolated from the outside, so that from the container's point of view it is the only operating system; AUFS adds stacking on top of that, allowing multiple containers to share part of the file system.

Q: What is the significance of the file-system sharing that AUFS implements?
A: For example, suppose I want two containers, one for a mysql server and one for a redmine server, both on ubuntu. With lxc I would have to construct two containers each containing ubuntu, then install the two pieces of software separately. With docker I can construct one ubuntu container first, then derive two containers from it to hold the mysql server and the redmine server. The ubuntu part is read-only for the containers derived from it. If the user later finds he needs a few more applications on top of mysql, they can easily be derived from the mysql server container. In this way, reuse is achieved through derivation. For more details, please refer to: 10 pictures to help you deeply understand Docker containers and images (http://dockone.io/article/783)

Q: What is the difference between an image and a container?
A: In essence an image is a container: an image is a read-only snapshot of a container. If a child container reused the parent container directly, then a child modifying the parent's content would affect the parent's other children as well; so the parent is turned into a read-only image that its children cannot modify. Looked at another way, a container is dynamic, like a working copy of code managed in git, and an image is like a commit (fittingly, the docker command that turns a container into an image is called commit). A container can only be used by its own developer; only after it is committed into an image can others branch off it and develop in parallel. Of course, after commit the image exists only locally; if multiple people are to collaborate, the image must be pushed to a server with the "docker push" command. Docker's server is called a docker registry.
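
A minimal sketch of the commit-and-push flow described above (the container name mydev and image name fireaxe/mysql-base are invented for illustration):

$ docker run -ti --name mydev ubuntu /bin/bash   # derive a working container from ubuntu
  ... install and configure mysql inside the container, then exit ...
$ docker commit mydev fireaxe/mysql-base         # freeze the container into a read-only image
$ docker push fireaxe/mysql-base                 # publish it to a registry for others to pull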
Q: What is a dockerfile?
A: A dockerfile is a script that generates an image, commonly used for deployment in production environments.
Example 1: I develop a piece of software and need to release a docker image every week. The manual process is to pull a base image, then download and install my software on top of it, and finally commit the result as a new image for release. With a dockerfile I can automate this process: each release only needs one docker build command, and the dockerfile I wrote earlier generates the new image.
Example 2: A production environment depends on multiple components, and those components are constantly updated. With plain images, every update means repackaging and resending the image. A dockerfile is much better: whenever the environment needs updating, just rerun the dockerfile, and it automatically downloads and installs the latest components according to its commands.
In summary, the dockerfile's role is that of a scripting language that automates packaging an environment. It does not play a big role during development itself.
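
A minimal sketch of Example 1, written as a shell session (the application path /opt/myapp and start script are invented for illustration):

$ cat > Dockerfile <<'EOF'
FROM ubuntu:14.04
# install the software's dependencies
RUN apt-get update && apt-get install -y apache2
# copy the application from the build context into the image (hypothetical path)
COPY myapp /opt/myapp
CMD ["/opt/myapp/start.sh"]
EOF
$ docker build -t myapp:weekly .    # one command rebuilds the release image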
2. Common commands
Command                          Explanation
create [--name container-id]     create a container from the specified image
start [-ti/-d/-v]                start the specified container
  -ti                            allocate a virtual terminal and attach to it
  -d                             run in the background (detached); docker returns instead of waiting for the command to finish
  -v                             map a host directory into the container
run [--name container-id]        'docker create' followed by 'docker start'
ps [-a]                          list running containers
  -a                             list all containers
images [-a]                      list all images
  -a                             list all images together with the layers that compose them
history                          show an image and the layers that compose it
stop                             shut a container down
pause                            pause a container
rm                               delete a container
commit                           create a new image from a container
rmi                              delete an image
pull                             download the specified image to the local machine
push                             upload an image to docker hub
login                            log in to docker hub
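
A short session exercising the commands above (a sketch; it uses the public nginx image purely as an example):

$ docker create --name web nginx    # create a container from the nginx image
$ docker start web                  # start it in the background
$ docker ps                         # the running container shows up here
$ docker stop web                   # shut it down
$ docker ps -a                      # stopped containers are only visible with -a
$ docker rm web                     # delete the container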

3. For a hands-on docker walkthrough, please refer to the link below: http://docs.docker.com/mac/started/
4. Data volumes & data volume containers
1) The significance of data volumes and data volume containers
Data volumes separate data from the application: the application backup contains no data, and the data is backed up on its own, because data and applications usually call for different backup strategies.
Data volume containers insulate the actual application container from the host. When the location of the data on the host changes, only the data volume container needs to be modified; the other application containers stay untouched.
2) Using data volumes
Create a container with one data volume:
$ docker run -v /data/path:/mount/path:ro --name dbdata ubuntu /bin/bash
Create a container with two data volumes:
$ docker run -d -v /data/path1:/mount/path1:ro -v /data/path2:/mount/path2:ro --name dbdata ubuntu /bin/bash
Mount the volumes from dbdata in an application container:
$ docker run -ti --volumes-from dbdata --name app ubuntu
The -v option mounts the host directory "/data/path" onto the "/mount/path" directory of the container dbdata, making dbdata the data volume container. The actual application container app is then built on top of dbdata: "--volumes-from dbdata" gives it access to dbdata's data volumes, with the same read/write attributes the volumes were mounted with.
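
A quick sketch to verify the sharing (mounting without the :ro flag so the volume is writable; the file name test.txt is invented for illustration):

$ docker run -v /data/path:/mount/path --name dbdata ubuntu /bin/bash
$ docker run -ti --volumes-from dbdata --name app ubuntu /bin/bash
  # inside app:  echo hello > /mount/path/test.txt
  # on the host: cat /data/path/test.txt prints "hello"
  # note: --volumes-from works even though dbdata itself has already exited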
