(K8s) Kubernetes lab 322: Implement etcd backup and restore
etcd is a consistent and highly available key-value store used by Kubernetes as the backing store for all cluster data
You should plan to back up etcd regularly
More information on etcd can be found here
The agenda of this lab is to practice taking an etcd backup and restoring it
The IP address of the master should stay the same; otherwise the restore gets complicated due to certificate issues
Let's practice
Step 0 : Create a Docker Hub account at https://hub.docker.com
Step 1 : Open Play with Kubernetes and log in with your Docker Hub account
Step 2 : Click on Start
Step 3 : It will start a 4-hour session; click on + ADD NEW INSTANCE
Step 4 : Click in the terminal and run the steps from lab 101 to build the cluster
Please check k8slab 101 https://www.shrlrn.com/practice/k8slab-101
create three instances
on the first instance, enter the command below
kubeadm init --apiserver-advertise-address $(hostname -i) --pod-network-cidr 10.5.0.0/16
capture the output of kubeadm join XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
enter the captured command on the second and third nodes
enter the command below on the first node
kubectl apply -f https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/kubeadm-kuberouter.yaml
you may also use kubeadm token list to find the token
if you did not capture the join output, use this command on the second and third nodes: kubeadm join <IP address of master/first node>:6443 --token <token from above command> --discovery-token-unsafe-skip-ca-verification
Step 5 : kubectl run web --image=nginx
create a pod named web using the nginx image
Step 6 : kubectl create deployment demo --image=nginx
create a deployment named demo using the nginx image
Step 7 : kubectl expose pod web --port 80 --name=nginx-svc --type=NodePort --target-port=80
create a NodePort service named nginx-svc exposing port 80 of the pod web
Step 8 : kubectl get all
check and confirm all components have been created and are in the Running state
Step 9 : kubectl get pods -n kube-system
check the name of the etcd pod
Step 10: kubectl cp -n kube-system <name of etcd pod>:usr/local/bin/etcdctl etcdctl
copy etcdctl from the etcd pod to the local machine
use the pod name from the output above
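The pod name can also be looked up rather than typed by hand. The sketch below assumes a standard kubeadm cluster, where the etcd static pod carries the label component=etcd; when no cluster is reachable it falls back to a hypothetical placeholder name (etcd-node1) instead of failing.

```shell
# Look up the etcd pod name via its kubeadm label, then copy etcdctl out of it.
if command -v kubectl >/dev/null 2>&1 && \
   POD=$(kubectl get pods -n kube-system -l component=etcd \
         -o jsonpath='{.items[0].metadata.name}' 2>/dev/null) && [ -n "$POD" ]
then
  kubectl cp -n kube-system "$POD:usr/local/bin/etcdctl" ./etcdctl
else
  POD=etcd-node1   # placeholder used when no cluster is reachable
fi
echo "etcd pod: $POD"
```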
Step 11 : ls -la
confirm that etcdctl has been copied
Step 12 : chmod +x etcdctl
add execute permission to etcdctl
Step 13 : ls -la
confirm the permission change from the previous command
Step 14 : cp etcdctl /usr/bin/
copy etcdctl to any directory in your PATH
Step 15 : etcdctl version
check the etcdctl version
Step 16 : cat /etc/kubernetes/manifests/etcd.yaml
/etc/kubernetes/manifests is the location where Kubernetes keeps the manifests of the static control-plane pods
the information in this file is required to work out the etcd configuration
take note of
--advertise-client-urls
--cert-file
--key-file
--trusted-ca-file
--data-dir
--initial-advertise-peer-urls
--initial-cluster
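To pull just those settings out of the manifest, a quick grep works. A minimal sketch, guarded so it is a no-op on machines where the manifest is absent:

```shell
# Extract the interesting flags from the etcd static pod manifest.
MANIFEST=/etc/kubernetes/manifests/etcd.yaml
FLAGS='--(advertise-client-urls|cert-file|key-file|trusted-ca-file|data-dir|initial-advertise-peer-urls|initial-cluster)='
if [ -f "$MANIFEST" ]; then
  grep -E -- "$FLAGS" "$MANIFEST"
fi
```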
Step 17 : ETCDCTL_API=3 etcdctl --endpoints=<use --advertise-client-urls> --cacert=<use --trusted-ca-file from above command> --cert=<use --cert-file from above command> --key=<use --key-file from above command> version
ETCDCTL_API=3 selects version 3 of the etcdctl API, as there are incompatibilities with the previous version
--endpoints tells etcdctl which etcd server(s) to connect to (there can be more than one in an HA setup)
--cacert is the CA bundle used to verify the certificate of the TLS-enabled etcd server
--cert is the TLS certificate file that identifies the client to the server
--key is the TLS key file that identifies the client to the server
Step 18 : ETCDCTL_API=3 etcdctl --endpoints=<use --advertise-client-urls> --cacert=<use --trusted-ca-file from above command> --cert=<use --cert-file from above command> --key=<use --key-file from above command> snapshot save <location of file where you want to save the backup>
the TLS options are the same as in step 17; snapshot save writes a point-in-time snapshot of the etcd keyspace to the given file
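Putting step 18 together, here is a filled-in sketch of the backup command. The certificate paths are the typical kubeadm defaults, and the endpoint IP and backup location are placeholders; substitute the real values you noted from etcd.yaml in step 16. The sketch only prints the assembled command so it can be reviewed before being run on the master node.

```shell
# Assemble the backup command from the values found in etcd.yaml.
ENDPOINT="https://192.168.0.8:2379"   # placeholder: your --advertise-client-urls
BACKUP="/root/etcdbackup"             # placeholder: where to save the snapshot
CMD="ETCDCTL_API=3 etcdctl --endpoints=$ENDPOINT \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key \
snapshot save $BACKUP"
echo "$CMD"   # review, then paste on the master node to run
```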
Step 19 : etcdctl snapshot status etcdbackup
check the status of the backup file (etcdbackup here is the file name chosen in step 18)
you can also add --write-out=table to get the output in table format
Step 20: kubectl delete deployment demo
delete deployment demo
Step 21 : kubectl delete pod web
delete pod web
Step 22 : kubectl delete service/nginx-svc
delete the service we created
Step 23 : kubectl get all
check and confirm all the objects have been deleted
Step 24 : systemctl stop kubelet
stop the kubelet service
Step 25 : systemctl stop docker
stop the docker service
Step 26 : ls -la /var/lib/
check and confirm the etcd directory is in /var/lib
this is the data directory used by etcd; it could be different in your case
check step 16 (--data-dir)
Step 27 : ETCDCTL_API=3 etcdctl --data-dir=/var/lib/etcd --initial-advertise-peer-urls=https://<use --initial-advertise-peer-urls from step 16>:2380 --initial-cluster=default=https://<use --initial-cluster from step 16>:2380 snapshot restore <file where you saved the backup in step 18>
ETCDCTL_API=3 selects version 3 of the etcdctl API, as before
--data-dir is the directory the snapshot is restored into; it must be empty or absent, so move the old /var/lib/etcd aside first (e.g. mv /var/lib/etcd /var/lib/etcd.bak)
--initial-advertise-peer-urls is the peer URL the restored member will advertise
--initial-cluster describes the membership used to initialise the restored cluster
this information can be obtained from step 16
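A filled-in sketch of the restore, again with placeholder values (the peer URL and backup path are assumptions; use your own from steps 16 and 18). The old data directory is moved aside first, because snapshot restore refuses to write into a non-empty --data-dir; the restore command itself is only printed for review before running it on the master node.

```shell
# Restore the snapshot into a fresh /var/lib/etcd.
PEER_URL="https://192.168.0.8:2380"   # placeholder: --initial-advertise-peer-urls
BACKUP="/root/etcdbackup"             # placeholder: file saved in step 18
DATA_DIR="/var/lib/etcd"
if [ -d "$DATA_DIR" ]; then
  # restore needs an empty data directory, so keep the old one as .bak
  mv "$DATA_DIR" "${DATA_DIR}.bak" || echo "move $DATA_DIR aside manually"
fi
CMD="ETCDCTL_API=3 etcdctl --data-dir=$DATA_DIR \
--initial-advertise-peer-urls=$PEER_URL \
--initial-cluster=default=$PEER_URL \
snapshot restore $BACKUP"
echo "$CMD"   # review, then run on the master node
```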
Step 28 : systemctl start docker
start the docker service
Step 29 : systemctl start kubelet
start the kubelet service
Step 30 : kubectl get nodes
check the status of the nodes
Step 31 : kubectl get all
check and confirm that the pod web, the deployment demo, and the NodePort service we deleted earlier have been restored
anything created after the backup was taken will not be restored