Upgrading and Scaling Worker Nodes in a Kubernetes Cluster

Reader contribution · 958 views · 2022-10-14



Upgrading a Kubernetes cluster worker node

First, check the node status of the cluster:

```
Last login: Thu Mar 14 09:39:26 2019 from 10.83.2.89
[root@kubemaster ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
kubemaster   Ready    master   17d   v1.13.3
kubenode1    Ready    <none>   17d   v1.13.3
kubenode2    Ready    <none>   17d   v1.13.3
```

Check which pods are running on the kubenode1 node:

```
[root@kubemaster ~]# kubectl get pods -o wide | grep kubenode1
account-summary-689d96d949-49bjr         1/1   Running   0   7d15h   10.244.1.17   kubenode1
compute-interest-api-5f54cc8dd9-44g9p    1/1   Running   0   7d15h   10.244.1.15   kubenode1
send-notification-fc7c8ffc4-rk5wl        1/1   Running   0   7d15h   10.244.1.16   kubenode1
transaction-generator-7cfccbbd57-8ts5s   1/1   Running   0   7d15h   10.244.1.18   kubenode1
# If other namespaces have pods too, add the namespace, e.g.:
# kubectl get pods -n kube-system -o wide | grep kubenode1
```

Use `kubectl cordon` to mark the kubenode1 node as unschedulable:

```
[root@kubemaster ~]# kubectl cordon kubenode1
node/kubenode1 cordoned
```

Checking the running pods again, they are still on kubenode1. `kubectl cordon` only prevents new pods from being scheduled onto kubenode1; pods already running on the node are not evicted:

```
[root@kubemaster ~]# kubectl get node
NAME         STATUS                     ROLES    AGE   VERSION
kubemaster   Ready                      master   17d   v1.13.3
kubenode1    Ready,SchedulingDisabled   <none>   17d   v1.13.3
kubenode2    Ready                      <none>   17d   v1.13.3
[root@kubemaster ~]# kubectl get pods -n kube-system -o wide | grep kubenode1
kube-flannel-ds-amd64-7ghpg   1/1   Running   1   17d   10.83.32.138   kubenode1
kube-proxy-2lfnm              1/1   Running   1   17d   10.83.32.138   kubenode1
```

To actually evict the pods, use `kubectl drain`. If DaemonSet-managed pods are still running on the node, add the `--ignore-daemonsets` flag:

```
[root@kubemaster ~]# kubectl drain kubenode1 --ignore-daemonsets
node/kubenode1 already cordoned
WARNING: Ignoring DaemonSet-managed pods: node-exporter-s5vfc, kube-flannel-ds-amd64-7ghpg, kube-proxy-2lfnm
pod/traefik-ingress-controller-7899bfbd87-wsl64 evicted
pod/grafana-57f7d594d9-vw5mp evicted
pod/tomcat-deploy-5fd9ffbdc7-cdnj8 evicted
pod/myapp-deploy-6b56d98b6b-rrb5b evicted
pod/transaction-generator-7cfccbbd57-8ts5s evicted
pod/prometheus-848d44c7bc-rtq7t evicted
pod/send-notification-fc7c8ffc4-rk5wl evicted
pod/compute-interest-api-5f54cc8dd9-44g9p evicted
pod/account-summary-689d96d949-49bjr evicted
node/kubenode1 evicted
```

Check again whether any pods are still running on kubenode1. Once the node is empty, shut it down, upgrade its hardware, and boot it back up:

```
[root@kubemaster ~]# kubectl get nodes
NAME         STATUS                     ROLES    AGE   VERSION
kubemaster   Ready                      master   17d   v1.13.3
kubenode1    Ready,SchedulingDisabled   <none>   17d   v1.13.3
kubenode2    Ready                      <none>   17d   v1.13.3
```

The node is still in the unschedulable state, so use `kubectl uncordon` to make it schedulable again:

```
[root@kubemaster ~]# kubectl uncordon kubenode1
node/kubenode1 uncordoned
[root@kubemaster ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
kubemaster   Ready    master   17d   v1.13.3
kubenode1    Ready    <none>   17d   v1.13.3
kubenode2    Ready    <none>   17d   v1.13.3
```

With that, upgrading one worker node of the cluster is complete. Next, let's add a new worker node to the cluster.
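The cordon/drain/uncordon sequence above can be strung together in a small helper. This is a minimal sketch, not part of the article: `upgrade_node` is a hypothetical function name, and it assumes `kubectl` is configured against the cluster with permission to cordon and drain nodes.

```shell
# Sketch: the node-maintenance sequence from the steps above.
# upgrade_node is a hypothetical helper; it assumes a working kubectl.
upgrade_node() {
    node=$1
    kubectl cordon "$node"                     # stop new pods landing on the node
    kubectl drain "$node" --ignore-daemonsets  # evict everything evictable
    echo ">>> ${node} drained: power off, upgrade hardware, boot it back up"
    kubectl uncordon "$node"                   # make the node schedulable again
}
```

For example, `upgrade_node kubenode1` would reproduce the transcript above for that node.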

Scaling out Kubernetes cluster worker nodes

First, refer to my earlier post on installing a Kubernetes cluster with kubeadm (https://blog.51cto.com/zgui2000/2354852) to set up the yum repositories and install docker-ce, kubelet, and the other packages.

Prepare the docker-ce yum repository file:

```
[root@kubenode3 yum.repos.d]# cat /etc/yum.repos.d/docker-ce.repo
[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-stable-debuginfo]
name=Docker CE Stable - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/debug-$basearch/stable
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-stable-source]
name=Docker CE Stable - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/source/stable
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-edge]
name=Docker CE Edge - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/edge
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-edge-debuginfo]
name=Docker CE Edge - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/debug-$basearch/edge
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-edge-source]
name=Docker CE Edge - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/source/edge
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-test]
name=Docker CE Test - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-test-debuginfo]
name=Docker CE Test - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/debug-$basearch/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-test-source]
name=Docker CE Test - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/source/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-nightly]
name=Docker CE Nightly - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-nightly-debuginfo]
name=Docker CE Nightly - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/debug-$basearch/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-nightly-source]
name=Docker CE Nightly - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/source/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
```

Prepare the kubernetes.repo yum repository file:

```
[root@kubenode3 yum.repos.d]# cat /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
```

Prepare the hosts file:

```
[root@kubenode3 ~]# cat /etc/hosts
127.0.0.1    localhost localhost.localdomain localhost4 localhost4.localdomain4
::1          localhost localhost.localdomain localhost6 localhost6.localdomain6
10.83.32.146 kubemaster
10.83.32.138 kubenode1
10.83.32.133 kubenode2
10.83.32.144 kubenode3
```

Disable SELinux (persistently, by editing /etc/selinux/config) and disable the firewall:

```
[root@kubenode3 yum.repos.d]# getenforce
Disabled
systemctl stop firewalld
systemctl disable firewalld
```

Install docker-ce, kubelet, and the related packages, then configure a Docker registry mirror (the mirror script requires restarting the docker service):

```
yum install docker-ce kubelet kubeadm kubectl
curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://f1361db2.m.daocloud.io
systemctl restart docker
```

Download the required images to the node in advance. Because the cluster was installed with kubeadm, components such as kube-apiserver, kube-controller-manager, kube-scheduler, etcd, and flannel run as containers, so their images are pulled ahead of time and re-tagged under the k8s.gcr.io names kubeadm expects:

```
docker pull mirrorgooglecontainers/kube-apiserver-amd64:v1.13.3
docker pull mirrorgooglecontainers/kube-controller-manager-amd64:v1.13.3
docker pull mirrorgooglecontainers/kube-scheduler-amd64:v1.13.3
docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.13.3
docker pull mirrorgooglecontainers/pause-amd64:3.1
docker pull mirrorgooglecontainers/etcd-amd64:3.2.24
docker pull carlziess/coredns-1.2.6
docker pull quay.io/coreos/flannel:v0.11.0-amd64
docker tag mirrorgooglecontainers/kube-apiserver-amd64:v1.13.3 k8s.gcr.io/kube-apiserver:v1.13.3
docker tag mirrorgooglecontainers/kube-controller-manager-amd64:v1.13.3 k8s.gcr.io/kube-controller-manager:v1.13.3
docker tag mirrorgooglecontainers/kube-scheduler-amd64:v1.13.3 k8s.gcr.io/kube-scheduler:v1.13.3
docker tag mirrorgooglecontainers/kube-proxy-amd64:v1.13.3 k8s.gcr.io/kube-proxy:v1.13.3
docker tag mirrorgooglecontainers/pause-amd64:3.1 k8s.gcr.io/pause:3.1
docker tag mirrorgooglecontainers/etcd-amd64:3.2.24 k8s.gcr.io/etcd:3.2.24
docker tag carlziess/coredns-1.2.6 k8s.gcr.io/coredns:1.2.6
```

Enable the bridge netfilter sysctls and enable the kubelet service:

```
vim /etc/sysctl.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
sysctl -p
[root@kubenode3 yum.repos.d]# systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
```
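The long run of matching docker pull / docker tag pairs above is repetitive, so as a sketch it can be folded into a loop. The image names and tags are the ones used in this article; `pull_k8s_images` is a hypothetical helper name, and coredns and flannel are handled separately because they come from different repositories.

```shell
# Sketch: the article's pull/tag pairs as a loop, to run on the new node.
# pull_k8s_images is a hypothetical helper name.
pull_k8s_images() {
    for img in kube-apiserver:v1.13.3 kube-controller-manager:v1.13.3 \
               kube-scheduler:v1.13.3 kube-proxy:v1.13.3 \
               pause:3.1 etcd:3.2.24; do
        name=${img%%:*}
        tag=${img##*:}
        # mirrorgooglecontainers mirrors the k8s.gcr.io images with an -amd64 suffix
        docker pull "mirrorgooglecontainers/${name}-amd64:${tag}"
        docker tag "mirrorgooglecontainers/${name}-amd64:${tag}" "k8s.gcr.io/${name}:${tag}"
    done
    # coredns and flannel come from different repositories
    docker pull carlziess/coredns-1.2.6
    docker tag carlziess/coredns-1.2.6 k8s.gcr.io/coredns:1.2.6
    docker pull quay.io/coreos/flannel:v0.11.0-amd64
}
```

Calling `pull_k8s_images` on kubenode3 performs the same pulls and re-tags as the explicit command list above.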

Now we can join the new worker node to the cluster.

Each join token is valid for only 24 hours. If no valid token is available, create a new one with the following command.

Create a token on the master:

```
[root@kubemaster ~]# kubeadm token create
fv93ud.33j7oxtdmodwfn7f
```

Look up the SHA-256 hash of the cluster CA certificate:

```
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
c414ceda959552049efccc2d9fd1fc1a2006689006a5f3b05e6ca05b3ff1a93e
```

On the new node, disable the swap partition and join the cluster:

```
swapoff -a
kubeadm join 10.83.32.146:6443 --token fv93ud.33j7oxtdmodwfn7f --discovery-token-ca-cert-hash sha256:c414ceda959552049efccc2d9fd1fc1a2006689006a5f3b05e6ca05b3ff1a93e --ignore-preflight-errors=Swap
```

Check the node status; kubenode3 has joined the cluster successfully:

```
[root@kubemaster ~]# kubectl get node
NAME         STATUS   ROLES    AGE     VERSION
kubemaster   Ready    master   18d     v1.13.3
kubenode1    Ready    <none>   17d     v1.13.3
kubenode2    Ready    <none>   17d     v1.13.3
kubenode3    Ready    <none>   2m22s   v1.13.4
```
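The discovery hash passed to kubeadm join is just the SHA-256 digest of the cluster CA's public key in DER form. The pipeline can be tried out without a cluster by running it against a throwaway self-signed certificate; the /tmp paths below are hypothetical stand-ins for /etc/kubernetes/pki/ca.crt.

```shell
# Demo of the hash pipeline on a throwaway self-signed cert; on a real
# master the input would be /etc/kubernetes/pki/ca.crt instead.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" -days 1 \
        -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt 2>/dev/null

hash=$(openssl x509 -pubkey -in /tmp/demo-ca.crt \
       | openssl rsa -pubin -outform der 2>/dev/null \
       | openssl dgst -sha256 -hex \
       | sed 's/^.* //')
# 64 hex characters, in the form expected by --discovery-token-ca-cert-hash
echo "sha256:${hash}"
```

Running the same pipeline against the real CA certificate on the master reproduces the `c414ceda...` value used in the join command above.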
