Getting started with Docker
This article is original work by fireaxe and is released under the GPL. You may freely copy and reprint it; when reprinting, please keep the document intact and credit the original author and link. The content may be used for any purpose, but no guarantee is made regarding consequences arising from its use.
Author: haoqiang1531@outlook.com Blog: fireaxe.blog.chinaunix.net

1. What is Docker?

In principle, Docker is a technology derived from lxc and AUFS.

Q: What is lxc?
A: lxc is a Linux kernel container technology, equivalent to a lightweight Linux virtual machine. Compared with instruction-set virtualization such as VirtualBox and VMware, its advantage is that it reuses the host system's kernel, so lxc can be regarded as a virtual machine that shares the kernel. This is a blessing and a curse: because it uses the host's kernel, the container must also run Linux. If you need a non-Linux system, only a full virtual machine such as VMware will do.

Q: In what scenarios do I need Docker?
A: This question is best discussed after reading the content below, but given its importance I decided to move it to the front. (Generally speaking, most readers only get through the first three paragraphs...)
1) Cloud deployment (I have never done this myself, so this is hearsay). Virtual machines used to be standard here. Once Docker appeared, applications that did not need a dedicated operating system were quickly moved over. lxc would also work, but cloud platforms operate at large scale, and lxc lacks the ease of deployment and migration that large-scale use demands.
2) Within a company. The first use is building a CI platform: Docker can separate the CI platform's tools from one another, making upgrades more flexible, and each tool can be backed up as an image. (Data backup has other methods.) The second is packaging the test environment: using a Dockerfile, the latest version is automatically assembled into a testable environment, eliminating environmental interference. Once testing is complete, the same image can be released externally, avoiding the configuration problems customers would otherwise hit when installing the software themselves. (You used to have to adapt to all kinds of environments; now you ship the environment together with the software and no longer need to test against multiple environments.)
3) Quickly setting up a development environment. Put the development environment into a Docker image; whenever someone gets a new computer and wants a development environment, they just pull the image and are ready in minutes!
Q: Why can lxc achieve isolation?
A: Linux boots by starting the kernel first; the kernel then starts user space. Nothing prevents the kernel from starting multiple user spaces; all that is required is isolation inside the kernel, which is also why lxc has to be implemented in the kernel. (A user space does not need to know that other user spaces exist besides itself.)
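The kernel-level isolation described above is built on Linux namespaces, which you can poke at directly with the `unshare` tool from util-linux. A minimal sketch (assumes a Linux host with unprivileged user namespaces enabled; the hostname `container-demo` is made up for illustration):

```shell
# Enter a new user + UTS (hostname) namespace. We appear as root inside
# the namespace while remaining an ordinary user outside it.
unshare --user --map-root-user --uts sh -c '
    hostname container-demo   # changes the hostname only inside this namespace
    echo "inside:  $(hostname) (uid=$(id -u))"
'
# The host hostname is untouched:
echo "outside: $(hostname)"
```

This is the same mechanism lxc (and Docker) uses: each container gets its own set of kernel namespaces, so its user space cannot see the others.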
Q: Why can different distributions run in containers on the same system at the same time?
A: Linux distributions differ mainly in user space; the kernel is the same across them. This is exactly what makes running different distributions side by side possible: lxc only provides the kernel to each container, and different user spaces are then constructed according to the needs of each distribution.
Q: Since lxc already provides containers, why not use lxc directly?
A: In fact, Docker is not the answer everywhere; it depends on the application scenario. lxc is essentially a virtual machine technology. If my daily work only requires different distributions, or different versions of the same distribution, then lxc is entirely sufficient. Docker is more about splitting a system into services. Today's systems keep growing more complex; services interfere with one another when they run on the same machine, and migration is hard: moving a service to another machine runs into all kinds of environment configuration and dependency problems. lxc can address this too, but since each lxc container only shares the kernel, the user-space environment has to be configured from scratch in every container. If three containers all need an apache server, I have to install it once in each container, which is obviously wasteful. Likewise, if several development environments need the gcc compiler, multiple copies must be installed. So people began looking at reusing part of user space.
Q: How is user-space-level reuse achieved?
A: As mentioned at the beginning, Docker is a technology based on lxc and AUFS, and AUFS is what lets users reuse part of user space. User space is essentially a file system, so reusing user space amounts to reusing file systems. AUFS can stack multiple directories into a single mount and set the read/write attributes of each directory independently. Put simply: lxc gives each container a file system completely isolated from the outside, so that from the container's point of view it is the only operating system; AUFS adds stacking on top of that, allowing multiple containers to share part of the file system.

Q: What is the significance of the file system sharing implemented by AUFS?
A: For example, suppose I want two containers, one running a mysql server and one running a redmine server, both on ubuntu. With plain lxc I would need to construct two separate ubuntu containers and then install the two pieces of software separately. With Docker I can first construct an ubuntu container, then derive two containers from it to hold the mysql server and the redmine server. The ubuntu part is read-only to the containers derived from it. And if one day the user finds he needs a few more applications on top of mysql, he can easily derive them from the mysql server container. Reuse is thus achieved through derivation. For more detail, see: 10 pictures to help you deeply understand Docker containers and images (http://dockone.io/article/783)

Q: What is the difference between an image and a container?
A: In essence, an image is a read-only snapshot of a container. If a child container reused its parent container directly, then a child modifying the parent's content would affect the parent's other children. Therefore the parent container is frozen into a read-only image that its children cannot modify. Put another way, a container is dynamic, like a working tree managed in git, and an image is the equivalent of a commit (fittingly, the Docker command that produces an image from a container is called commit). A container can only be used by its own developer; only after it has been committed into an image can others branch off from it for parallel development. Of course, after a commit the image exists only locally. For multiple people to collaborate, the image must be pushed to a server with the "docker push" command. Docker's server is called a docker registry.
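The commit-and-push workflow above can be sketched as a shell session. This is illustrative only: the container name `mysql-box` and repository `myrepo/mysql-app` are invented, and all commands assume a running Docker daemon and a registry login:

```shell
# Start a container from the shared ubuntu base image
docker run -ti --name mysql-box ubuntu /bin/bash
# ...inside the container: install mysql-server, then exit...

# Freeze the modified container into a read-only image (like a git commit)
docker commit mysql-box myrepo/mysql-app:v1

# Publish the image so others can derive their own containers from it
docker push myrepo/mysql-app:v1
```

After the push, a collaborator can `docker pull myrepo/mysql-app:v1` and branch off from exactly that state, just as one branches from a git commit.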
Q: What is a Dockerfile?
A: A Dockerfile is a script that generates images, commonly used for deployment in production environments.
Example 1: I maintain a piece of software and need to release a Docker image every week. The manual process is to pull a base image, download and install my software on top of it, and finally commit the result as a new image for release. With a Dockerfile I can automate this process: each release only needs a single "docker build" command, and the prewritten Dockerfile generates the new image.
Example 2: A production environment depends on multiple components, and these components are constantly updated. With plain images, every update means repackaging and re-sending the image. With a Dockerfile it is much better: whenever the environment needs updating, just re-run the build and it automatically downloads and installs the latest components according to the Dockerfile's instructions.
In short, the role of a Dockerfile is to be a scripting language that automates packaging the environment. It does not play a big role during development itself.
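As a concrete sketch of Example 1, a weekly release could be scripted like this. The base image tag, the package name `mysoftware`, and the image tag `myrepo/mysoftware:weekly` are all invented for illustration, and `docker build` assumes a running Docker daemon:

```shell
# Write a minimal Dockerfile into the current directory
cat > Dockerfile <<'EOF'
FROM ubuntu:22.04
RUN apt-get update \
    && apt-get install -y --no-install-recommends mysoftware \
    && rm -rf /var/lib/apt/lists/*
CMD ["mysoftware"]
EOF

# One command now rebuilds the release image from scratch
docker build -t myrepo/mysoftware:weekly .
```

The manual pull / install / commit cycle collapses into a single reproducible `docker build`.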
2. Common commands
| Command | Explanation |
| --- | --- |
| create [--name container-id] | Create a container from the specified image |
| start [-ti/-d/-v] | Start the specified container. -ti allocates a pseudo-terminal and attaches to it; -d runs in the background (does not exit when the command completes); -v maps a host directory into the container |
| run [--name container-id] | 'docker create' followed by 'docker start' |
| ps [-a] | List running containers; -a lists all containers |
| images [-a] | List all images; -a also lists the layers that make up each image |
| history | Show an image and the layers that make it up |
| stop | Shut a container down |
| pause | Pause a container |
| rm | Delete a container |
| commit | Create a new image from a container |
| rmi | Delete an image |
| pull | Fetch the specified image from a registry to the local machine |
| push | Upload an image to docker hub |
| login | Log in to docker hub |
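A typical container lifecycle ties most of these commands together. A sketch only: the container name `web1` is made up, and every command assumes a running Docker daemon:

```shell
docker pull ubuntu                           # fetch the base image locally
docker create --name web1 ubuntu sleep 1000  # create the container without starting it
docker start web1                            # start it in the background
docker ps                                    # web1 shows up as running
docker stop web1                             # shut it down
docker ps -a                                 # -a also lists stopped containers
docker rm web1                               # delete the container
docker rmi ubuntu                            # delete the image when no longer needed
```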
3. For a step-by-step Docker walkthrough, see: http://docs.docker.com/mac/started/
4. Data volumes & data volume containers
1) Why data volumes and data volume containers
Data volume: separates data from the application. Backing up the application then includes no data; the data is backed up on its own, because data and application backups usually require different strategies.
Data volume container: insulates the application containers from the host. When the location of the data on the host changes, only the data volume container needs to be modified; the other application containers stay untouched.
2) Using data volumes
Create a container with one data volume:
$ docker run -v /data/path:/mount/path:ro --name dbdata ubuntu /bin/bash
Create a container with two data volumes:
$ docker run -d -v /data/path1:/mount/path1:ro -v /data/path2:/mount/path2:ro --name dbdata ubuntu /bin/bash
Mount the volumes of dbdata into an application container:
$ docker run -ti --volumes-from dbdata --name app ubuntu
Here -v mounts the host directory "/data/path" at "/mount/path" inside the container dbdata, making dbdata the data volume container. The actual application container app is then created on top of it: "--volumes-from dbdata" gives app access to dbdata's data volumes.
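A quick way to convince yourself that the volume really is shared (names and paths are the same illustrative ones as above, here mounted read-write; requires a running Docker daemon):

```shell
# dbdata exists only to own the volume; it never needs to run anything
docker create -v /data/path:/mount/path --name dbdata ubuntu

# Any container started with --volumes-from sees the same directory
docker run --rm --volumes-from dbdata ubuntu \
    sh -c 'echo hello > /mount/path/probe'
docker run --rm --volumes-from dbdata ubuntu \
    cat /mount/path/probe
```

The second `docker run` reads back the file the first one wrote, even though the two application containers never mounted the host path themselves.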