
Installing the Latest Kubernetes (v1.23.5)

By 哈 · Original · 2022-04-29 15:22:54


1. Install Docker on both machines

  # 1. Add the Docker yum repo
  yum install -y wget && wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
  # 2. Install Docker
  yum -y install docker-ce-18.06.1.ce-3.el7
  # 3. Enable at boot and start now
  systemctl enable docker && systemctl start docker
  # 4. Check the version
  docker --version
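
Before moving on: kubeadm's preflight checks expect swap to be off, SELinux to be permissive, and bridged traffic to be visible to iptables. The original post does not show these steps; the following is a minimal host-prep sketch (run on both machines) based on the standard kubeadm prerequisites for CentOS 7:

  # Disable swap (kubelet refuses to start with swap enabled by default)
  swapoff -a && sed -ri 's/.*swap.*/#&/' /etc/fstab
  # Set SELinux to permissive
  setenforce 0 && sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
  # Load br_netfilter and let iptables see bridged traffic
  modprobe br_netfilter
  cat <<'EOF' > /etc/sysctl.d/k8s.conf
  net.bridge.bridge-nf-call-iptables  = 1
  net.bridge.bridge-nf-call-ip6tables = 1
  net.ipv4.ip_forward                 = 1
  EOF
  sysctl --system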

2. Install the latest Kubernetes

  # Find the latest version
  [root@master ~]# curl -sSL https://dl.k8s.io/release/stable.txt
  v1.23.5
  # Download and unpack
  [root@master tmp]# wget -q https://dl.k8s.io/v1.23.5/kubernetes-server-linux-amd64.tar.gz
  [root@master tmp]# tar -zxf kubernetes-server-linux-amd64.tar.gz
  [root@master tmp]# ls kubernetes
  addons  kubernetes-src.tar.gz  LICENSES  server
  [root@master tmp]# ls kubernetes/server/bin/ | grep -E 'kubeadm|kubelet|kubectl'
  kubeadm
  kubectl
  kubelet
  # Everything we need is under server/bin/; move kubeadm, kubectl and kubelet into /usr/bin.
  [root@master tmp]# mv kubernetes/server/bin/kube{adm,ctl,let} /usr/bin/
  [root@master tmp]# ls /usr/bin/kube*
  /usr/bin/kubeadm  /usr/bin/kubectl  /usr/bin/kubelet
  [root@master tmp]# kubeadm version
  [root@master tmp]# kubectl version --client
  [root@master tmp]# kubelet --version
  # To keep the components running reliably in production and to make them easier to manage,
  # we add a systemd unit for kubelet so that systemd supervises the service.
  [root@master tmp]# cat <<'EOF' > /etc/systemd/system/kubelet.service
  [Unit]
  Description=kubelet: The Kubernetes Agent
  Documentation=http://kubernetes.io/docs/
  [Service]
  ExecStart=/usr/bin/kubelet
  Restart=always
  StartLimitInterval=0
  RestartSec=10
  [Install]
  WantedBy=multi-user.target
  EOF
  [root@master tmp]# mkdir -p /etc/systemd/system/kubelet.service.d
  [root@master tmp]# cat <<'EOF' > /etc/systemd/system/kubelet.service.d/kubeadm.conf
  [Service]
  Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
  Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
  EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
  EnvironmentFile=-/etc/default/kubelet
  ExecStart=
  ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
  EOF
  # Enable at boot
  [root@master tmp]# systemctl enable kubelet
  Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
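  # (Added check, not in the original post:) reload systemd so it picks up the new unit
  # and drop-in, then inspect the merged unit. Note that kubelet will crash-loop until
  # `kubeadm init` writes /var/lib/kubelet/config.yaml -- that is expected at this point.
  systemctl daemon-reload
  systemctl cat kubelet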
  # At this point the groundwork is mostly done and we could create the cluster with
  # kubeadm. But first we need to install two more tools: crictl and socat.
  # Kubernetes v1.23.5 pairs with crictl v1.23.0
  [root@master ~]# wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.23.0/crictl-v1.23.0-linux-amd64.tar.gz
  [root@master ~]# tar zxvf crictl-v1.23.0-linux-amd64.tar.gz
  [root@master ~]# mv crictl /usr/bin/
  sudo yum install -y socat
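  # (Added sketch, not in the original, assuming Docker/dockershim as the container
  # runtime, which Kubernetes v1.23 still ships:) point crictl at the dockershim socket
  # so that `crictl ps` and friends work without endpoint warnings.
  cat <<'EOF' > /etc/crictl.yaml
  runtime-endpoint: unix:///var/run/dockershim.sock
  image-endpoint: unix:///var/run/dockershim.sock
  EOF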
  # Bootstrap the master
  [root@master ~]# kubeadm init --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16
  [init] Using Kubernetes version: v1.23.5
  [preflight] Running pre-flight checks
  error execution phase preflight: [preflight] Some fatal errors occurred:
  [ERROR FileExisting-conntrack]: conntrack not found in system path
  [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
  To see the stack trace of this error execute with --v=5 or higher
  # The preflight check failed: conntrack-tools is missing, so install it
  yum -y install socat conntrack-tools
  # It failed again:
  [kubelet-check] Initial timeout of 40s passed.
  [kubelet-check] It seems like the kubelet isn't running or healthy.
  [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
  # Docker installed via yum defaults to the cgroupfs cgroup driver, while kubelet
  # defaults to the systemd driver, so the two disagree. Switch Docker to systemd:
  # Add the following to /etc/docker/daemon.json
  vim /etc/docker/daemon.json
  {
    "exec-opts": ["native.cgroupdriver=systemd"]
  }
  # Restart docker
  systemctl restart docker
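  # (Added check, not in the original:) confirm the driver actually changed.
  # Expected output: "Cgroup Driver: systemd"
  docker info 2>/dev/null | grep -i 'cgroup driver'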
  # Re-initialize with kubeadm
  kubeadm reset   # reset first
  kubeadm init \
    --apiserver-advertise-address=192.168.42.122 \
    --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version v1.23.5 \
    --service-cidr=10.96.0.0/12 \
    --pod-network-cidr=10.244.0.0/16 \
    --ignore-preflight-errors=all
  kubeadm reset
  # A simpler init also works (this is what finally succeeded):
  kubeadm init --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16
  Your Kubernetes control-plane has initialized successfully!
  /var/lib/kubelet/config.yaml   # kubelet config generated by kubeadm
  /etc/kubernetes/pki            # certificate directory
  [root@master ~]# kubeadm config images list --kubernetes-version v1.23.5
  k8s.gcr.io/kube-apiserver:v1.23.5
  k8s.gcr.io/kube-controller-manager:v1.23.5
  k8s.gcr.io/kube-scheduler:v1.23.5
  k8s.gcr.io/kube-proxy:v1.23.5
  k8s.gcr.io/pause:3.6
  k8s.gcr.io/etcd:3.5.1-0
  k8s.gcr.io/coredns/coredns:v1.8.6
  [root@master ~]# kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.23.5
  [config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.5
  [config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.5
  [config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.5
  [config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.23.5
  [config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.6
  [config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.1-0
  [config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.8.6
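  # (Added check, not in the original:) confirm the images landed in the local Docker cache
  docker images | grep registry.aliyuncs.com/google_containers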
  # Configure kubectl's kubeconfig. The export below only lasts for the current shell,
  # so it has to be repeated after every reboot (a persistent fix follows this block).
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
  # Alternatively, if you are the root user, you can run:
  export KUBECONFIG=/etc/kubernetes/admin.conf
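  # (Added sketch, not in the original, addressing the reboot note above; assumes a root
  # login shell using bash:) make KUBECONFIG permanent instead of re-exporting each time.
  echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> /root/.bash_profile
  # The $HOME/.kube/config copy above is already persistent and needs no export.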
  You should now deploy a pod network to the cluster.
  Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
    https://kubernetes.io/docs/concepts/cluster-administration/addons/
  Then you can join any number of worker nodes by running the following on each as root:
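  # (The actual `kubeadm join ...` command was omitted from the original post; it is
  # printed at the end of `kubeadm init`. If you no longer have it, kubeadm can mint
  # a fresh one on the master:)
  kubeadm token create --print-join-command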
  # Install a pod-network add-on: flannel or calico
  mkdir ~/kubernetes-flannel && cd ~/kubernetes-flannel
  wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
  kubectl apply -f kube-flannel.yml
  kubectl get nodes
  [root@master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
  Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
  podsecuritypolicy.policy/psp.flannel.unprivileged created
  clusterrole.rbac.authorization.k8s.io/flannel created
  clusterrolebinding.rbac.authorization.k8s.io/flannel created
  serviceaccount/flannel created
  configmap/kube-flannel-cfg created
  daemonset.apps/kube-flannel-ds created
  [root@master ~]# kubectl get pod -n kube-system
  NAME                             READY   STATUS    RESTARTS   AGE
  coredns-6d8c4cb4d-7jfb8          0/1     Pending   0          11m
  coredns-6d8c4cb4d-m8hfd          0/1     Pending   0          11m
  etcd-master                      1/1     Running   4          11m
  kube-apiserver-master            1/1     Running   3          11m
  kube-controller-manager-master   1/1     Running   4          11m
  kube-flannel-ds-m65q6            1/1     Running   0          17s
  kube-proxy-qlrmp                 1/1     Running   0          11m
  kube-scheduler-master            1/1     Running   4          11m
  # coredns stayed Pending and I could not find the cause,
  # so I decided to try calico instead (a quick diagnostic sketch follows).
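  # (Added diagnostic sketch, not in the original: `kubectl describe` shows the scheduling
  # Events for a Pending pod -- look for taint, resource, or "network not ready" messages.)
  kubectl -n kube-system describe pod -l k8s-app=kube-dns | tail -n 20
  kubectl describe node master | grep -A 3 Taints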
  # First, delete kube-flannel
  [root@master ~]# kubectl delete -f kube-flannel.yml
  Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
  podsecuritypolicy.policy "psp.flannel.unprivileged" deleted
  clusterrole.rbac.authorization.k8s.io "flannel" deleted
  clusterrolebinding.rbac.authorization.k8s.io "flannel" deleted
  serviceaccount "flannel" deleted
  configmap "kube-flannel-cfg" deleted
  daemonset.apps "kube-flannel-ds" deleted
  # Clean up flannel's network interfaces and CNI config
  [root@master ~]# ifconfig cni0 down
  cni0: ERROR while getting interface flags: No such device
  [root@master ~]# ip link delete cni0
  Cannot find device "cni0"
  [root@master ~]# rm -rf /var/lib/cni/
  [root@master ~]# ifconfig flannel.1 down
  [root@master ~]# ip link delete flannel.1
  [root@master ~]# rm -f /etc/cni/net.d/*
  [root@master ~]# restart kubelet
  -bash: restart: command not found
  [root@master ~]# systemctl restart kubelet
  # Install calico
  [root@master ~]# curl https://projectcalico.docs.tigera.io/manifests/calico.yaml -O
    % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                   Dload  Upload   Total   Spent    Left  Speed
  100  212k  100  212k    0     0  68018      0  0:00:03  0:00:03 --:--:-- 68039
  [root@master ~]# ls
  calico.yaml  kube-flannel.yml  kubernetes-flannel
  [root@master ~]# kubectl get nodes
  NAME     STATUS     ROLES                  AGE   VERSION
  master   NotReady   control-plane,master   16h   v1.23.5
  node1    NotReady   <none>                 12h   v1.23.5
  [root@master ~]# kubectl apply -f calico.yaml
  configmap/calico-config created
  customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
  customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
  clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
  clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
  clusterrole.rbac.authorization.k8s.io/calico-node created
  clusterrolebinding.rbac.authorization.k8s.io/calico-node created
  daemonset.apps/calico-node created
  serviceaccount/calico-node created
  deployment.apps/calico-kube-controllers created
  serviceaccount/calico-kube-controllers created
  Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
  poddisruptionbudget.policy/calico-kube-controllers created
  # Watch the pods
  [root@master ~]# kubectl get -w pod -A
  NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
  kube-system   calico-kube-controllers-56fcbf9d6b-28w9g   1/1     Running   0          21m
  kube-system   calico-node-btgnl                          1/1     Running   0          21m
  kube-system   calico-node-z64mb                          1/1     Running   0          21m
  kube-system   coredns-6d8c4cb4d-8pnxx                    1/1     Running   0          12h
  kube-system   coredns-6d8c4cb4d-jdbj2                    1/1     Running   0          12h
  kube-system   etcd-master                                1/1     Running   4          17h
  kube-system   kube-apiserver-master                      1/1     Running   3          17h
  kube-system   kube-controller-manager-master             1/1     Running   4          17h
  kube-system   kube-proxy-68qrn                           1/1     Running   0          12h
  kube-system   kube-proxy-qlrmp                           1/1     Running   0          17h
  kube-system   kube-scheduler-master                      1/1     Running   4          17h

Everything is running normally now.
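
With the control plane, calico, and coredns all Running, a quick throwaway deployment confirms that the cluster can actually schedule and serve a workload. This is a minimal sketch that is not part of the original post; the deployment name nginx and the NodePort exposure are arbitrary choices:

  # Create a test deployment, expose it, and check that the pod gets scheduled
  kubectl create deployment nginx --image=nginx
  kubectl expose deployment nginx --port=80 --type=NodePort
  kubectl get pods -o wide
  kubectl get svc nginx
  # Clean up when done
  kubectl delete svc,deployment nginx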

Original article: https://blog.csdn.net/qq_36002737/article/details/123678418
