How to use Kubernetes container orchestration in Linux systems
With the rise of cloud-native applications, Kubernetes has become the de facto standard for container orchestration. Since Kubernetes is open source and can run on various Linux distributions, it is very common to use Kubernetes container orchestration in Linux systems. This article will introduce how to install and configure Kubernetes in a Linux system, and how to use Kubernetes for container orchestration.
Installing Kubernetes in a Linux system usually requires the following steps:
1.1 Install Docker
Kubernetes needs a container runtime, and this guide uses Docker, so Docker must be installed first. On Ubuntu, you can install Docker with the following commands:
sudo apt-get update
sudo apt-get install docker.io
On CentOS, you can install Docker with the following commands:
sudo yum install docker
sudo systemctl start docker
sudo systemctl enable docker
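Whichever distribution you use, you can confirm that Docker is installed and running before moving on, for example:
docker --version                 # prints the installed Docker version
sudo systemctl status docker     # the Docker daemon should be reported as active (running)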
1.2 Install Kubernetes
On Ubuntu, you can install Kubernetes with the following commands:
sudo apt-get update && sudo apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
On CentOS, you can install Kubernetes with the following commands (they assume the official Kubernetes yum repository has already been configured, since kubelet, kubeadm, and kubectl are not shipped in the default repositories or EPEL):
sudo yum install -y epel-release
sudo yum update -y
sudo yum install -y kubelet kubeadm kubectl
sudo systemctl enable kubelet && sudo systemctl start kubelet
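One prerequisite worth calling out before initializing the cluster: by default the kubelet refuses to start while swap is enabled, so swap should be disabled on every node. A typical way to do this (assuming the swap entry lives in /etc/fstab) is:
sudo swapoff -a                              # disable swap immediately
sudo sed -i '/ swap / s/^/#/' /etc/fstab     # comment out the swap line so it stays disabled after a reboot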
2.1 Initialize the Master node
In a Kubernetes cluster, the Master node is responsible for managing the entire cluster. To initialize the Master node, run the following command on it (the --pod-network-cidr value matches the default network used by Flannel, which we install later):
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
This command installs and starts the control-plane components and, in the last few lines of its output, prints the command that Worker nodes use to join the cluster, for example:
kubeadm join 192.168.1.100:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
Please note that the token and certificate hash in this output are unique to your cluster; keep the command somewhere safe, because you will need it on each Worker node.
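The same output also prints the steps for pointing kubectl at the new cluster as a regular user; they are typically:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config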
2.2 Join Worker Node
To add a Worker node to the Kubernetes cluster, run the join command printed in the previous step on that node. For example:
sudo kubeadm join 192.168.1.100:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
Running this command will install the necessary components and add the Worker node to the cluster.
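Back on the Master node, you can verify that the Worker has joined by listing the cluster's nodes:
kubectl get nodes    # the new Worker appears here; it stays NotReady until a network plug-in is installed (next step)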
2.3 Install the network plug-in
Kubernetes requires a network plug-in to provide Pod-to-Pod networking. Commonly used network plug-ins include Flannel and Calico; here we install Flannel, which can be done by running the following command on the Master node:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
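After applying the manifest, you can check that the Flannel Pods have started and that the nodes become Ready (depending on the Flannel version, its Pods run in the kube-system or kube-flannel namespace):
kubectl get pods --all-namespaces    # the Flannel Pods should be Running
kubectl get nodes                    # all nodes should eventually show Ready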
Now, we have successfully installed and configured Kubernetes on our Linux system. Next, we will introduce how to use Kubernetes for container orchestration.
3.1 Create Deployment
In Kubernetes, Deployment is an abstraction for creating and managing Pods. To create a Deployment, use the kubectl command. For example, to create a Deployment named nginx, you can run the following command:
kubectl create deployment nginx --image=nginx
This command creates a Deployment named nginx that runs a Pod from the nginx image on Docker Hub.
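Equivalently, the Deployment can be described declaratively in a YAML manifest and created with kubectl apply -f. A minimal sketch matching the command above (the replica count and labels are the defaults kubectl would use) might look like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx        # image pulled from Docker Hub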
3.2 Edit Deployment
To modify a Deployment, you can use the kubectl edit deployment command. For example, to modify the number of replicas of nginx Deployment to 3, you can run the following command:
kubectl edit deployment nginx
This will open the Deployment's YAML definition in an editor. Change the value of the replicas field to 3, then save and exit the editor; Kubernetes will create the additional Pods automatically.
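If you instead keep the Deployment in a YAML file (for example the manifest sketched in section 3.1, saved as nginx-deployment.yaml; the filename is just an example), the same change is a one-line edit that you then re-apply:
spec:
  replicas: 3    # desired number of Pod copies
kubectl apply -f nginx-deployment.yaml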
3.3 Exposing Service
In Kubernetes, Service is an abstraction used to expose the network endpoint of a Pod. To expose the Deployment's Service, you can use the kubectl expose command. For example, to expose the Service of nginx Deployment, you can run the following command:
kubectl expose deployment nginx --port=80 --type=NodePort
This command creates a Service named nginx that targets port 80 of the nginx Pods and, because the type is NodePort, exposes it on a port of every node in the cluster.
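Because no node port was specified, Kubernetes picks one from its default 30000-32767 range; you can see which port was assigned by inspecting the Service:
kubectl get service nginx    # the PORT(S) column shows 80:<assigned node port>/TCP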
3.4 Scale Deployment
To scale a Deployment, you can use the kubectl scale command. For example, to increase the number of replicas of the nginx Deployment to 5, run:
kubectl scale deployment nginx --replicas=5
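You can watch the new replicas come up and, if your cluster runs the metrics-server add-on, also let Kubernetes adjust the replica count automatically. For example:
kubectl get deployment nginx                                            # READY should eventually report 5/5
kubectl autoscale deployment nginx --min=2 --max=10 --cpu-percent=80    # optional: CPU-based autoscaling (requires metrics-server)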
3.5 View Deployment Status
To view the status of your Deployments, use kubectl. For example, to list all Deployments and their status, run:
kubectl get deployments
This command lists every Deployment in the cluster along with its name, its desired and currently available replica counts, and its age.
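For the nginx Deployment from the earlier steps, the output would look roughly like the following (the exact columns vary slightly between kubectl versions, and the AGE value is illustrative):
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   5/5     5            5           10m
You can drill down further with kubectl get pods and kubectl describe deployment nginx.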
Summary
Through this article, we have learned how to install and configure Kubernetes in a Linux system and use Kubernetes for container orchestration. These skills are must-haves for any developer and system administrator who wants to enter the cloud-native world.