Solutions to errors when starting Docker: 1. open /etc/sysconfig/docker and add OPTIONS="--selinux-enabled ..."; 2. restart firewalld and then restart the Docker daemon; 3. run the "docker-storage-setup" command; and so on.
The operating environment of this article: CentOS 7.2 system, Docker version 18.04.0, Dell G3 computer.
How to solve the error when starting docker?
Summary of error reports when Docker starts up
Eight common Docker faults
Error one: error initializing graphdriver
Docker start-up error reports
The system is CentOS 7.2
The system kernel and docker version are as follows:
[root@docker ~]# uname -r
3.10.0-327.el7.x86_64
[root@docker ~]# docker version
Client:
 Version:       18.04.0-ce
 API version:   1.37
 Go version:    go1.9.4
 Git commit:    3d479c0
 Built:         Tue Apr 10 18:21:36 2018
 OS/Arch:       linux/amd64
 Experimental:  false
 Orchestrator:  swarm

Server:
 Engine:
  Version:      18.04.0-ce
  API version:  1.37 (minimum version 1.12)
  Go version:   go1.9.4
  Git commit:   3d479c0
  Built:        Tue Apr 10 18:25:25 2018
  OS/Arch:      linux/amd64
  Experimental: false
The startup error message is as follows:
[root@docker ~]# systemctl start docker
Job for docker.service failed because the control process exited with error code. See "systemctl status docker.service" and "journalctl -xe" for details.
[root@docker ~]# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
   Active: failed (Result: start-limit) since 日 2018-04-22 20:52:39 CST; 5s ago
     Docs: https://docs.docker.com
  Process: 4810 ExecStart=/usr/bin/dockerd (code=exited, status=1/FAILURE)
 Main PID: 4810 (code=exited, status=1/FAILURE)

4月 22 20:52:39 docker.cgy.com systemd[1]: Failed to start Docker Application Container Engine.
4月 22 20:52:39 docker.cgy.com systemd[1]: Unit docker.service entered failed state.
4月 22 20:52:39 docker.cgy.com systemd[1]: docker.service failed.
4月 22 20:52:39 docker.cgy.com systemd[1]: docker.service holdoff time over, scheduling restart.
4月 22 20:52:39 docker.cgy.com systemd[1]: start request repeated too quickly for docker.service
4月 22 20:52:39 docker.cgy.com systemd[1]: Failed to start Docker Application Container Engine.
4月 22 20:52:39 docker.cgy.com systemd[1]: Unit docker.service entered failed state.
4月 22 20:52:39 docker.cgy.com systemd[1]: docker.service failed.
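As an aside (my own suggestion, not part of the original article), the underlying error can usually also be read from the systemd journal mentioned in the message above, without running the daemon in the foreground:

# Show the most recent log entries for the docker unit; the real cause of the
# start failure is normally in the last few lines.
journalctl -u docker.service --no-pager -n 50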
The specific cause of the failure cannot be seen in the messages above, so I started dockerd directly in the foreground and found the real error at the bottom of its output:
[root@docker ~]# dockerd
INFO[2018-04-22T21:12:46.111704443+08:00] libcontainerd: started new docker-containerd process  pid=5903
INFO[0000] starting containerd  module=containerd revision=773c489c9c1b21a6d78b5c538cd395416ec50f88 version=v1.0.3
...... part of the output omitted ......
INFO[0000] loading plugin "io.containerd.grpc.v1.introspection"...  module=containerd type=io.containerd.grpc.v1
INFO[0000] serving...  address="/var/run/docker/containerd/docker-containerd-debug.sock" module="containerd/debug"
INFO[0000] serving...  address="/var/run/docker/containerd/docker-containerd.sock" module="containerd/grpc"
INFO[0000] containerd successfully booted in 0.002763s  module=containerd
Error starting daemon: error initializing graphdriver: overlay: the backing xfs filesystem is formatted without d_type support, which leads to incorrect behavior. Reformat the filesystem with ftype=1 to enable d_type support. Backing filesystems without d_type support are not supported.
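Before applying any fix, it may be worth confirming the diagnosis. The following check is my own suggestion (not from the original article) and assumes /var/lib/docker sits on the xfs filesystem named in the error:

# Check whether the backing xfs filesystem was created with ftype=1;
# ftype=0 means d_type is not supported and overlay/overlay2 will refuse to run.
df -h /var/lib/docker                    # find the mount point backing /var/lib/docker
xfs_info /var/lib/docker | grep ftype    # or pass the mount point printed by df above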
Based on the last line ("Error starting daemon: error initializing graphdriver ..."), I searched and found the solution in this blog post:
https://blog.csdn.net/liu9718214/article/details/79134900
The specific solution is:
# vim /etc/sysconfig/docker
Add the following content:
OPTIONS="--selinux-enabled --log-driver=journald --signature-verification=false"
Add the following content to /etc/docker/daemon.json:
{ "registry-mirrors": ["http://4a1df5ef.m.daocloud.io"], # 是用来pull容器加速用的,跟此次问题无关。 "storage-driver": "devicemapper" # 解决此次问题 }Then restart docker and solve the problem smoothly:
[root@docker ~]# systemctl restart docker
[root@docker ~]# ps aux | grep docker
root      5922  1.7  1.6 528432 62568 ?        Ssl  21:15   0:00 /usr/bin/dockerd
root      5927  1.1  0.5 356984 22100 ?        Ssl  21:15   0:00 docker-containerd --config /var/run/docker/containerd/containerd.toml
root      6028  0.0  0.0 112664   964 pts/0    S+   21:15   0:00 grep --color=auto docker

Error two: iptables failed

firewalld was introduced in CentOS 7. It is built on top of iptables and still uses iptables for packet filtering underneath, which can conflict with Docker: when firewalld starts or restarts, it flushes the DOCKER rules from iptables, breaking Docker's networking. Under systemd, firewalld starts before Docker, but if you start or restart firewalld after Docker is already running, you have to restart the Docker daemon as well.

System:
[root@controller ~]# cat /etc/redhat-release
CentOS Linux release 7.0.1406 (Core)
The error message is as follows:

[root@controller ~]# docker run -it -P docker.io/nginx
/usr/bin/docker-current: Error response from daemon: driver failed programming external connectivity on endpoint gloomy_kirch (10289e7a87e65771da90cda531951b7339bee9cb5953474460451cd48013aff0): iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 0/0 --dport 32810 -j DNAT --to-destination 172.17.0.2:80 ! -i docker0: iptables: No chain/target/match by that name.
(exit status 1).
This happened because the container had already been started successfully once before this run. During that earlier run the firewall prevented normal access to Nginx, so I flushed the filter table of iptables and restarted iptables; when I ran the container again, the error above appeared.

Solution

Restart the firewall:
# Run the following on CentOS 7
[root@controller ~]# systemctl restart firewalld

Restart the docker daemon process:
[root@controller ~]# systemctl restart docker

Run an nginx container again and the error is gone:
[root@controller ~]# docker run -it --name nginx -p 80:80 -v /www:/wwwroot docker.io/nginx /bin/bash
root@a8a92c8f7760:/#
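As an optional sanity check (my addition, not part of the original steps), you can confirm that restarting the daemon recreated Docker's iptables chains, since the missing nat-table DOCKER chain is exactly what the failed rule tried to append to:

# Both chains should exist again after dockerd has been restarted.
iptables -t nat -L DOCKER -n
iptables -t filter -L DOCKER -n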
Error three: Unable to take ownership of thin-pool

The Docker daemon fails to start with: Unable to take ownership of thin-pool
Apr 27 13:51:59 master systemd: Started Docker Storage Setup.
Apr 27 13:51:59 master systemd: Starting Docker Application Container Engine...
Apr 27 13:51:59 master dockerd-current: time="2018-04-27T13:51:59.088441356+08:00" level=warning msg="could not change group /var/run/docker.sock to docker: group docker not found"
Apr 27 13:51:59 master dockerd-current: time="2018-04-27T13:51:59.091166189+08:00" level=info msg="libcontainerd: new containerd process, pid: 20930"
Apr 27 13:52:00 master dockerd-current: Error starting daemon: error initializing graphdriver: devmapper: Unable to take ownership of thin-pool (docker--vg-docker--pool) that already has used data blocks
Apr 27 13:52:00 master systemd: docker.service: main process exited, code=exited, status=1/FAILURE
Apr 27 13:52:00 master systemd: Failed to start Docker Application Container Engine.
Apr 27 13:52:00 master systemd: Unit docker.service entered failed state.
Apr 27 13:52:00 master systemd: docker.service failed.

Reason: the metadata in /var/lib/docker/devicemapper/metadata/ has been lost.

Workaround: https://bugzilla.redhat.com/show_bug.cgi?id=1321640#c5

Eric Paris 2016-04-27 08:20:10 EDT:
I feel like the kcs kinda misses telling users the actual problem. Nor does it really make it clear the solution. IF you are using device mapper (instead of loopback) /var/lib/docker contains metadata informing docker about the contents of the device mapper storage area. If you delete /var/lib/docker that metadata is lost. Docker is then able to detect that the thin pool has data but docker is unable to make use of that information. The only solution is to delete the thin pool and recreate it so that both the thin pool and the metadata in /var/lib/docker will be empty.

Solution:
- Execute command:
rm -rf /var/lib/docker/*
- Execute the command:
rm -rf /etc/sysconfig/docker-storage
- Execute the command:
lvremove /dev/docker-vg/docker-pool
- Use existing docker-vg LVM volume group:
cat <<EOF > /etc/sysconfig/docker-storage-setup
VG=docker-vg
EOF
- Execute the command:
docker-storage-setup
- Restart docker:

systemctl start docker
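As a hedged follow-up (not in the original procedure), you can verify that docker-storage-setup recreated the thin pool and that the daemon is actually using it; the names docker-vg and docker-pool are taken from the commands above:

# The recreated thin pool should appear as a thin LV in the docker-vg volume group.
lvs docker-vg
# The running daemon should report the devicemapper driver and its pool name.
docker info | grep -i -A 3 "storage driver"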
Error four: failed to write to cpuset.cpus

The following error is reported when docker run starts a container:
[root@backup-system cpu]# docker run -ti --name hkp_ubuntu --cpuset-cpus=0-3 ubuntu bash
docker: Error response from daemon: OCI runtime create failed: container_linux.go:370: starting container process caused: process_linux.go:326: applying cgroup configuration for process caused: failed to write "0-3\n" to "/sys/fs/cgroup/cpuset/docker/cpuset.cpus": write /sys/fs/cgroup/cpuset/docker/cpuset.cpus: invalid argument: unknown.

This error occurs because the CPUs requested for this cgroup are already claimed by another cgroup, so they cannot be assigned exclusively here.
Therefore, first check and adjust cpuset.cpus in each cgroup to make sure the CPUs used by the current cgroup really are allocated only to it; only then can cpu_exclusive be set.

In this case the specific cause was that, during an earlier experiment, a new "container" directory had been created under /sys/fs/cgroup/cpuset/ and container/cpuset.cpus had been set to 0-3:

[root@backup-system docker]# cat /sys/fs/cgroup/cpuset/container/cpuset.cpus
0-3

After setting /sys/fs/cgroup/cpuset/container/cpuset.cpus back to empty, the problem was solved.
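For reference, a rough sketch of the check and the fix (assuming the same cgroup layout as above):

# Inspect which CPUs each cpuset cgroup currently claims.
cat /sys/fs/cgroup/cpuset/docker/cpuset.cpus
cat /sys/fs/cgroup/cpuset/container/cpuset.cpus

# Clear the stale assignment in the leftover "container" cgroup (only possible
# while no tasks are attached to it), or simply remove the unused directory.
echo "" > /sys/fs/cgroup/cpuset/container/cpuset.cpus
# rmdir /sys/fs/cgroup/cpuset/container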