(K8s) Kubernetes lab 321: Perform a version upgrade on a Kubernetes cluster using kubeadm

Agenda

In this lab we will upgrade a Kubernetes cluster to version v1.19.3.

You can only upgrade one minor version at a time; in this case we will be upgrading from v1.18.4 to v1.19.3.

Let's Practice

Step 0 : Create a Docker Hub account at https://hub.docker.com

Step 1 : Open Play with Kubernetes and log in with your Docker Hub account.

Step 2 : Click on start

Step 3 : It will start a 4-hour session; click on + ADD NEW INSTANCE

Step 4 : Click in the terminal and run the steps from lab 101 to build the cluster

Step 5 : kubectl run web --image=nginx

  • creates a pod named web using the nginx image
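  • if you are curious what manifest this creates (optional, a client-side dry run that does not change the cluster):

      kubectl run web --image=nginx --dry-run=client -o yaml    # print the equivalent pod YAML without creating it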

Step 6 : kubectl create deployment demo --image=nginx

  • creates a deployment named demo using the nginx image
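  • optionally confirm the deployment and its pod; kubectl create deployment labels the pods it manages with app=demo:

      kubectl get deployment demo      # deployment should show 1/1 ready
      kubectl get pods -l app=demo     # the pod created by the deployment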

Step 7 : kubectl expose pod web --port 80 --name=nginx-svc --type=NodePort --target-port=80

  • creates a NodePort service named nginx-svc exposing the pod web on port 80
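  • optionally look up the NodePort that was assigned and test it; the curl below assumes you run it on one of the cluster nodes:

      kubectl get svc nginx-svc                                                  # shows the assigned NodePort
      NODEPORT=$(kubectl get svc nginx-svc -o jsonpath='{.spec.ports[0].nodePort}')
      curl -s http://localhost:${NODEPORT}                                       # should return the nginx welcome page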

Step 8 : kubectl get all

  • check and confirm that all components have been created and are in the Running state
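  • it also helps to note which node each pod landed on, since we will drain nodes later in this lab:

      kubectl get pods -o wide    # the NODE column shows where each pod runs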

Step 9 : kubectl get nodes

  • check the version on the nodes; it should be v1.18.4

Step 10 : kubelet --version

  • check the version; it should be v1.18.4

Step 11 : kubectl version

  • check the version; both the client and server should be v1.18.4

Step 12 : kubeadm version

  • check the version; it should be v1.18.4
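  • as a shortcut, the three checks from Steps 10 to 12 can be combined into one line (same commands; the short output flags are available in this Kubernetes version):

      kubelet --version; kubeadm version -o short; kubectl version --short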

Step 13 : yum list --showduplicates kubeadm --disableexcludes=kubernetes | grep 1.19

  • find 1.19.3 in the list

  • adjust the command below with the exact version if required

Step 14: yum install -y kubeadm-1.19.3-0 kubelet-1.19.3-0 kubectl-1.19.3-0 --disableexcludes=kubernetes

  • kubeadm, kubelet, and kubectl are three critical components of Kubernetes

  • upgrade all three of them.
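  • as a quick sanity check (not part of the original lab steps), you can confirm the versions the package manager actually installed:

      rpm -q kubeadm kubelet kubectl    # each should show version 1.19.3-0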

Step 15 : kubeadm version

  • check the version; it should be v1.19.3

Step 16 : kubelet --version

  • check the version; it should be v1.19.3

Step 17 : kubectl version

  • check the version; the client should now be v1.19.3

  • the server version will still be v1.18.4, since the control plane has not been upgraded yet

Step 18 : kubectl drain node1 --ignore-daemonsets

  • drain will evict all running pods from node1

  • and will cordon it, so no new pods will be scheduled on it
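  • if you want to double-check that the cordon took effect, either of these works (the jsonpath simply reads the node's unschedulable flag):

      kubectl get node node1 -o jsonpath='{.spec.unschedulable}'   # prints true for a cordoned node
      kubectl get nodes                                            # node1 STATUS should include SchedulingDisabled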

Step 19 : kubeadm upgrade plan

  • this command checks whether the cluster can be upgraded and which versions are available

  • it recommends upgrading to v1.19.6; you can ignore that for this lab

Step 20 : kubeadm upgrade apply v1.19.3

  • this will upgrade the control plane components

  • it will prompt you to confirm the upgrade; type y and press Enter

  • run kubectl version again; both the client and server versions should now be v1.19.3
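  • if you prefer a non-interactive run, kubeadm upgrade apply also accepts a -y/--yes flag to skip the confirmation prompt (verify with kubeadm upgrade apply --help on your version):

      kubeadm upgrade apply v1.19.3 -y    # same upgrade, without the interactive prompt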

Step 21 : kubectl apply -f https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/kubeadm-kuberouter.yaml

  • upgrade the CNI plugin manually by re-applying its manifest (kube-router in this lab)
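  • to verify the CNI rolled out, list the daemonsets and their pods; the exact daemonset name comes from the manifest, so just list everything in kube-system:

      kubectl get daemonset -n kube-system       # the kube-router daemonset should be fully available
      kubectl get pods -n kube-system -o wide    # its pods should be Running on every node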

Step 22 : kubectl uncordon node1

  • remove the node from maintenance mode so new pods can be scheduled on it again

Step 23 : systemctl daemon-reload && systemctl restart kubelet

  • do a daemon-reload and restart the kubelet on node1 so it picks up the upgraded version

Step 24 : kubectl get nodes

  • check the version of node1

  • if it is not v1.19.3, find the kubelet process with ps -ef | grep kubelet and kill it with kill -9 <process id>; systemd will restart the kubelet

  • check again; it will now show version v1.19.3
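  • as an alternative to killing the process, restarting the kubelet service on node1 has the same effect:

      systemctl restart kubelet    # systemd restarts the kubelet with the upgraded binary
      kubectl get nodes            # node1 should now report v1.19.3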

Step 25 : kubectl get all

  • confirm all objects are still in the Running state

Step 26 : kubectl drain node2 --ignore-daemonsets

  • drain will evict all running pods from node2

  • and will cordon it, so no new pods will be scheduled on it

Step 27 : kubectl get nodes

  • make sure node2's STATUS shows SchedulingDisabled

Step 28 : yum install -y kubeadm-1.19.3-0 kubelet-1.19.3-0 --disableexcludes=kubernetes

  • run the above command on node2

Step 29 : systemctl daemon-reload && systemctl restart kubelet

  • do a daemon-reload and restart the kubelet on node2

Step 30 : kubeadm version

  • check the version on node2; it should be v1.19.3

Step 31 : kubelet --version

  • check the version on node2; it should be v1.19.3

Step 32 : kubectl uncordon node2

  • remove the node from maintenance mode so new pods can be scheduled on it again (run this from the master)

Step 33 : kubectl get nodes

  • run this command on the master and check the version of node2

  • if it is not v1.19.3, on node2 find the kubelet process with ps -ef | grep kubelet and kill it with kill -9 <process id>

  • check again; it will now show version v1.19.3

Step 34 : If your cluster has more worker nodes, repeat Steps 26 to 33 for each remaining node.
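Once every node reports the new version, a short final check (the same verification commands used earlier in this lab, collected in one place):

      kubectl get nodes -o wide             # every node should report VERSION v1.19.3
      kubectl get pods --all-namespaces     # all system and workload pods should be Running
      kubectl get all                       # the web pod, demo deployment and nginx-svc should still be present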