CKA Full Course Day Static Pods, Manual Scheduling, Labels, and Selectors in Kubernetes
In this task, we’ll be exploring how to bypass the Kubernetes scheduler by directly assigning a pod to a specific node in a cluster. This can be a useful approach for specific scenarios where you need a pod to run on a particular node without going through the usual scheduling process.
We assume you have a Kubernetes cluster running, created with a KIND (Kubernetes in Docker) configuration similar to the one described in previous posts. Here, we’ve created a cluster named kind-cka-cluster:
kind create cluster --name kind-cka-cluster --config config.yml
Since we’ve already covered cluster creation with KIND in earlier posts, we won’t go into those details again.
To see the nodes available in this new cluster, run:
kubectl get nodes
You should see output similar to this:
NAME                             STATUS   ROLES           AGE   VERSION
kind-cka-cluster-control-plane   Ready    control-plane   7m    v1.31.0
For this task, we’ll be scheduling our pod on kind-cka-cluster-control-plane.
Now, let’s create a pod manifest in YAML format. Using the nodeName field in our pod configuration, we can specify the exact node for the pod, bypassing the Kubernetes scheduler entirely.
node.yml:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  nodeName: kind-cka-cluster-control-plane
In this manifest, the nodeName field tells the kubelet on kind-cka-cluster-control-plane to run the pod directly, so the scheduler never considers it. This is the most direct method of node selection, overriding mechanisms like nodeSelector or affinity rules.
According to Kubernetes documentation:
"nodeName is a more direct form of node selection than affinity or nodeSelector. nodeName is a field in the Pod spec. If the nodeName field is not empty, the scheduler ignores the Pod and the kubelet on the named node tries to place the Pod on that node. Using nodeName overrules using nodeSelector or affinity and anti-affinity rules."
For more details, refer to the Kubernetes documentation on node assignment.
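For contrast, the scheduler-driven alternative mentioned above would use nodeSelector instead of nodeName. A minimal sketch (the disktype: ssd label is a hypothetical node label, not something set up in this post):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    disktype: ssd   # hypothetical label; the scheduler picks any node carrying it
```

Unlike nodeName, this still goes through the scheduler, so taints, resource requests, and other constraints are respected.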
With our manifest ready, apply it to the cluster:
kubectl apply -f node.yml
This command creates the nginx pod and assigns it directly to the kind-cka-cluster-control-plane node.
Finally, check that the pod is running on the specified node:
kubectl get pods -o wide
The output should confirm that the nginx pod is indeed running on kind-cka-cluster-control-plane:
NAME    READY   STATUS    RESTARTS   AGE   IP           NODE                             NOMINATED NODE   READINESS GATES
nginx   1/1     Running   0          28s   10.244.0.5   kind-cka-cluster-control-plane   <none>           <none>
This verifies that by setting the nodeName field, we successfully bypassed the Kubernetes scheduler and directly scheduled our pod on the control plane node.
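As an extra check (a small sketch, not part of the original steps), you can print just the node name recorded in the pod spec using kubectl's jsonpath output:

```
# Should print: kind-cka-cluster-control-plane
kubectl get pod nginx -o jsonpath='{.spec.nodeName}'
```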
To access the control plane node of our newly created cluster, use the following command:
docker exec -it kind-cka-cluster-control-plane bash
Navigate to the directory containing the static pod manifests:
cd /etc/kubernetes/manifests
Verify the current manifests:
ls
To restart the kube-controller-manager, move its manifest file temporarily:
mv kube-controller-manager.yaml /tmp
After confirming the restart, return the manifest file to its original location:
mv /tmp/kube-controller-manager.yaml /etc/kubernetes/manifests/
With these steps, we successfully demonstrated how to access the control plane and manipulate the static pod manifests to manage the lifecycle of control plane components.
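For context, static pods are defined by manifest files that the kubelet watches in this directory (the location is set by staticPodPath in the kubelet configuration). A minimal sketch of such a manifest — the file name static-web.yaml and the pod name are illustrative, not files that exist in this cluster:

```yaml
# /etc/kubernetes/manifests/static-web.yaml (hypothetical example)
apiVersion: v1
kind: Pod
metadata:
  name: static-web
spec:
  containers:
  - name: web
    image: nginx
```

Dropping a file like this into the directory makes the kubelet start the pod; removing the file stops it — exactly the mechanism we used on kube-controller-manager above.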
After temporarily moving the kube-controller-manager.yaml manifest file to /tmp, we can verify that the kube-controller-manager has restarted. As mentioned in previous posts, I am using k9s, which clearly shows the restart. For readers without k9s, try the following command:
Inspect Events:
To gather more information, use:
kubectl describe pod kube-controller-manager-kind-cka-cluster-control-plane -n kube-system
Look for events at the end of the output. A successful restart will show events similar to:
Events:
  Type    Reason   Age                    From     Message
  ----    ------   ----                   ----     -------
  Normal  Killing  4m12s (x2 over 8m32s)  kubelet  Stopping container kube-controller-manager
  Normal  Pulled   3m6s (x2 over 7m36s)   kubelet  Container image "registry.k8s.io/kube-controller-manager:v1.31.0" already present on machine
  Normal  Created  3m6s (x2 over 7m36s)   kubelet  Created container kube-controller-manager
  Normal  Started  3m6s (x2 over 7m36s)   kubelet  Started container kube-controller-manager
The presence of "Killing," "Created," and "Started" events indicates that the kube-controller-manager was stopped and then restarted successfully.
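Another simple way to observe this (a sketch, not from the original steps) is to watch the kube-system pods in a second terminal while you move the manifest out and back:

```
# The kube-controller-manager pod should disappear when the manifest
# is moved to /tmp, then reappear once it is moved back.
kubectl get pods -n kube-system -w
```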
Once you have completed your tasks and confirmed the behavior of your pods, it is important to clean up any resources that are no longer needed. This helps maintain a tidy environment and frees up resources in your cluster.
List Pods:
First, you can check the current pods running in your cluster:
kubectl get pods
You might see output like this:
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          28s
Describe Pod:
To get more information about a specific pod, use the describe command:
kubectl describe pod nginx
This will give you details about the pod, such as its name, namespace, node, and other configurations:
Name:         nginx
Namespace:    default
Node:         kind-cka-cluster-control-plane/...
Status:       Running
IP:           10.244.0.5
...
Delete the Pod:
If you find that the pod is no longer needed, you can safely delete it with the following command:
kubectl delete pod nginx
Verify Deletion:
After executing the delete command, you can verify that the pod has been removed by listing the pods again:
kubectl get pods
Ensure that the nginx pod no longer appears in the list.
By performing these cleanup steps, you help ensure that your Kubernetes cluster remains organized and efficient.
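Since the pod was created from a manifest file, an equivalent cleanup (a small sketch of the alternative) is to delete by file rather than by name:

```
# Deletes every resource defined in node.yml, here just the nginx pod
kubectl delete -f node.yml
```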
In this section, we will create three pods based on the nginx image, each with a unique name and specific labels indicating different environments: env:test, env:dev, and env:prod.
Step 1: Create the Script
First, we'll create a script that contains the commands to generate the pods. Use the following command to create the script file (the name create-pods.sh is just a choice; any name works):

vim create-pods.sh
Next, paste the following code into the file. The pod names here are illustrative; each pod runs the nginx image and carries its environment label:

#!/bin/bash
kubectl run pod-test --image=nginx --labels=env=test
kubectl run pod-dev --image=nginx --labels=env=dev
kubectl run pod-prod --image=nginx --labels=env=prod
kubectl get pods --show-labels
Step 2: Make the Script Executable
After saving the file, make the script executable with the following command (again assuming the file name create-pods.sh):

chmod +x create-pods.sh
Step 3: Execute the Script
Run the script to create the pods:
./create-pods.sh
You should see output indicating the creation of the pods:
pod/pod-test created
pod/pod-dev created
pod/pod-prod created
Step 4: Verify the Created Pods
The script will then display the status of the created pods:
NAME       READY   STATUS    RESTARTS   AGE   LABELS
pod-test   1/1     Running   0          5s    env=test
pod-dev    1/1     Running   0          4s    env=dev
pod-prod   1/1     Running   0          3s    env=prod
At this point, you can filter the pods based on their labels. For example, to find the pod with the env=dev label, use the following command:
kubectl get pods -l env=dev
You should see output confirming the pod is running:
NAME      READY   STATUS    RESTARTS   AGE
pod-dev   1/1     Running   0          30s
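Beyond simple equality selectors, kubectl also accepts set-based label selectors. A couple of hedged examples (the pod names assume the illustrative script above, but the -l syntax itself is standard kubectl):

```
# Pods whose env label is either dev or prod
kubectl get pods -l 'env in (dev,prod)'

# Pods that have an env label at all, regardless of its value
kubectl get pods -l env
```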