k8s series - 16 - Worker node installation

Configure the container runtime

PS: This step needs to be performed on each of the two worker nodes; in my setup the two worker nodes are node2 and node3.

1. Download the software

# Set the version number
[root@node2 ~]# VERSION=1.4.3
# Download
[root@node2 ~]# wget https://github.com/containerd/containerd/releases/download/v${VERSION}/cri-containerd-cni-${VERSION}-linux-amd64.tar.gz
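
Optionally, you can verify the archive before extracting it. This is a hedged extra step, not part of the original instructions; it assumes the release page also publishes a matching .sha256sum file (most containerd releases do), and if that file is missing you can simply skip the check.

# Fetch the published checksum file (assumed name, matching the archive name)
wget https://github.com/containerd/containerd/releases/download/v${VERSION}/cri-containerd-cni-${VERSION}-linux-amd64.tar.gz.sha256sum
# "OK" means the download is intact
sha256sum -c cri-containerd-cni-${VERSION}-linux-amd64.tar.gz.sha256sum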

2. Extract the package

# Extract
[root@node2 ~]# tar -xvf cri-containerd-cni-${VERSION}-linux-amd64.tar.gz
# Copy the files into place
[root@node2 ~]# cp etc/crictl.yaml /etc/
[root@node2 ~]# cp etc/systemd/system/containerd.service /etc/systemd/system/
[root@node2 ~]# cp -r usr /
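
As a quick sanity check (not in the original steps), you can confirm the binaries copied above ended up under /usr/local/bin and run correctly:

# Both commands should print a version string without errors
containerd --version
crictl --version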

3. containerd configuration file

# Create the config directory
[root@node2 ~]# mkdir -p /etc/containerd
# Generate the default config file
[root@node2 ~]# containerd config default > /etc/containerd/config.toml
# Optional tuning: for example, if you have a large disk mounted somewhere, you can change the storage directory here
[root@node2 ~]# vim /etc/containerd/config.toml
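
If you do want to move containerd's storage onto a bigger disk, the relevant key in the generated config.toml is the top-level root setting (default /var/lib/containerd). A minimal sketch, assuming a hypothetical mount point /data/containerd:

# Point containerd's data root at a larger disk (the path is only an example)
sed -i 's#^root = .*#root = "/data/containerd"#' /etc/containerd/config.toml
# Confirm the change
grep '^root' /etc/containerd/config.toml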

4. Start the containerd service

[root@node2 ~]# systemctl enable containerd
Created symlink from /etc/systemd/system/multi-user.target.wants/containerd.service to /etc/systemd/system/containerd.service.
[root@node2 ~]# systemctl restart containerd
[root@node2 ~]# systemctl status containerd
● containerd.service - containerd container runtime
   Loaded: loaded (/etc/systemd/system/containerd.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2022-03-19 22:27:56 CST; 23s ago
     Docs: https://containerd.io
  Process: 8034 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
 Main PID: 8038 (containerd)
    Tasks: 8
   Memory: 19.4M
   CGroup: /system.slice/containerd.service
           └─8038 /usr/local/bin/containerd
Mar 19 22:27:56 node2 containerd[8038]: time="2022-03-19T22:27:56.154498095+08:00" level=info msg="Start subscribing containerd event"
Mar 19 22:27:56 node2 containerd[8038]: time="2022-03-19T22:27:56.155149545+08:00" level=info msg="Start recovering state"
Mar 19 22:27:56 node2 containerd[8038]: time="2022-03-19T22:27:56.155233071+08:00" level=info msg="Start event monitor"
Mar 19 22:27:56 node2 containerd[8038]: time="2022-03-19T22:27:56.155269850+08:00" level=info msg="Start snapshots syncer"
Mar 19 22:27:56 node2 containerd[8038]: time="2022-03-19T22:27:56.155279889+08:00" level=info msg="Start cni network conf syncer"
Mar 19 22:27:56 node2 containerd[8038]: time="2022-03-19T22:27:56.155284057+08:00" level=info msg="Start streaming server"
Mar 19 22:27:56 node2 systemd[1]: Started containerd container runtime.
Mar 19 22:27:56 node2 containerd[8038]: time="2022-03-19T22:27:56.164126975+08:00" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 19 22:27:56 node2 containerd[8038]: time="2022-03-19T22:27:56.164164104+08:00" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 19 22:27:56 node2 containerd[8038]: time="2022-03-19T22:27:56.164200622+08:00" level=info msg="containerd successfully booted in 0.090964s"
[root@node2 ~]#
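
Optionally, you can also confirm the CRI endpoint is answering; crictl reads the socket path from the /etc/crictl.yaml copied earlier:

# Should print the runtime status as JSON, with no connection errors
crictl info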

Configure kubelet

PS: This step needs to be performed on both worker nodes.

1. Prepare the kubelet configuration

# Create the certificate directory
[root@node2 ~]# mkdir -p /etc/kubernetes/ssl/
# Declare this node's hostname
[root@node2 ~]# HOSTNAME=node2
# Copy the certificates into place
# If you followed this series exactly, the next command may report an error on node2,
# because the certificates were already moved there when the master node was configured; you can ignore that error
[root@node2 ~]# mv ${HOSTNAME}-key.pem ${HOSTNAME}.pem ca.pem ca-key.pem /etc/kubernetes/ssl/
# Move the kubeconfig as well
[root@node2 ~]# mv ${HOSTNAME}.kubeconfig /etc/kubernetes/kubeconfig
# Declare this node's IP address
[root@node2 ~]# IP=192.168.112.131
# Write the config file
[root@node2 ~]# cat <<EOF > /etc/kubernetes/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/etc/kubernetes/ssl/ca.pem"
authorization:
  mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
  - "169.254.25.10"
podCIDR: "10.200.0.0/16"
address: ${IP}
readOnlyPort: 0
staticPodPath: /etc/kubernetes/manifests
healthzPort: 10248
healthzBindAddress: 127.0.0.1
kubeletCgroups: /systemd/system.slice
resolvConf: "/etc/resolv.conf"
runtimeRequestTimeout: "15m"
kubeReserved:
  cpu: 200m
  memory: 512M
tlsCertFile: "/etc/kubernetes/ssl/${HOSTNAME}.pem"
tlsPrivateKeyFile: "/etc/kubernetes/ssl/${HOSTNAME}-key.pem"
EOF
[root@node2 ~]#
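
Because the here-document above uses an unquoted EOF, the shell expands ${IP} and ${HOSTNAME} while writing the file. An optional quick check that real values, not literal variable names, ended up in the generated config:

# Expect the node IP and the node2 certificate paths, not ${IP}/${HOSTNAME}
grep -E 'address|tlsCertFile|tlsPrivateKeyFile' /etc/kubernetes/kubelet-config.yaml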

2. Configure the kubelet service

[root@node2 ~]# cat <<EOF > /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service

[Service]
ExecStart=/usr/local/bin/kubelet \\
  --config=/etc/kubernetes/kubelet-config.yaml \\
  --container-runtime=remote \\
  --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \\
  --image-pull-progress-deadline=2m \\
  --kubeconfig=/etc/kubernetes/kubeconfig \\
  --network-plugin=cni \\
  --node-ip=${IP} \\
  --register-node=true \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
[root@node2 ~]#

Configure nginx-proxy

What does this service do? As the name suggests, it is a proxy. What does it proxy? It proxies the worker node's access to the apiserver: it is a simple high-availability scheme for the apiserver, so that every component's calls to the apiserver are load-balanced across the master nodes.

Because nginx-proxy forwards to the apiserver's port 6443, and our nginx itself also listens on port 6443, it can only be deployed on nodes that do not already run an apiserver (otherwise the port would be taken). Which nodes in our cluster have no apiserver? The worker nodes.

In our cluster, nginx-proxy only needs to be deployed on node3, because it is the only pure worker node.
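
Once the static pod defined below is running, a simple way to confirm the proxy path works is to hit 127.0.0.1:6443 on node3. This is a hedged check, not from the original article: even an Unauthorized response proves the request was forwarded to an apiserver through nginx.

# -k skips certificate verification; a 401/403 JSON body still means the proxy forwarded the request
curl -k https://127.0.0.1:6443/healthz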

1. nginx configuration file

# Create the nginx config directory
[root@node3 ~]# mkdir -p /etc/nginx
# Declare the master IP addresses
[root@node3 ~]# MASTER_IPS=(192.168.112.130 192.168.112.131)
# Generate the nginx config file. Note:
# if your cluster differs from mine and does not have two master nodes, adjust the stream block below:
# one "server" line per master node, everything else stays the same
[root@node3 ~]# cat <<EOF > /etc/nginx/nginx.conf
error_log stderr notice;
worker_processes 2;
worker_rlimit_nofile 130048;
worker_shutdown_timeout 10s;

events {
  multi_accept on;
  use epoll;
  worker_connections 16384;
}

stream {
  upstream kube_apiserver {
    least_conn;
    server ${MASTER_IPS[0]}:6443;
    server ${MASTER_IPS[1]}:6443;
  }

  server {
    listen 127.0.0.1:6443;
    proxy_pass kube_apiserver;
    proxy_timeout 10m;
    proxy_connect_timeout 1s;
  }
}

http {
  aio threads;
  aio_write on;
  tcp_nopush on;
  tcp_nodelay on;
  keepalive_timeout 5m;
  keepalive_requests 100;
  reset_timedout_connection on;
  server_tokens off;
  autoindex off;

  server {
    listen 8081;
    location /healthz {
      access_log off;
      return 200;
    }
    location /stub_status {
      stub_status on;
      access_log off;
    }
  }
}
EOF
[root@node3 ~]#

2. Generate the nginx-proxy static pod manifest

[root@node3 ~]# mkdir -p /etc/kubernetes/manifests/
[root@node3 ~]# cat <<EOF > /etc/kubernetes/manifests/nginx-proxy.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-proxy
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    k8s-app: kube-nginx
spec:
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet
  nodeSelector:
    kubernetes.io/os: linux
  priorityClassName: system-node-critical
  containers:
  - name: nginx-proxy
    image: docker.io/library/nginx:1.19
    imagePullPolicy: IfNotPresent
    resources:
      requests:
        cpu: 25m
        memory: 32M
    securityContext:
      privileged: true
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8081
    readinessProbe:
      httpGet:
        path: /healthz
        port: 8081
    volumeMounts:
    - mountPath: /etc/nginx
      name: etc-nginx
      readOnly: true
  volumes:
  - name: etc-nginx
    hostPath:
      path: /etc/nginx
EOF
[root@node3 ~]#

Configure kube-proxy

PS: This step needs to be performed on both worker nodes.

1. Generate the configuration file

# Move the kubeconfig into place
[root@node2 ~]# mv kube-proxy.kubeconfig /etc/kubernetes/
# Create the kube-proxy config file
[root@node2 ~]# cat <<EOF > /etc/kubernetes/kube-proxy-config.yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
bindAddress: 0.0.0.0
clientConnection:
  kubeconfig: "/etc/kubernetes/kube-proxy.kubeconfig"
clusterCIDR: "10.200.0.0/16"
mode: ipvs
EOF
[root@node2 ~]#
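
Note that mode: ipvs relies on the IPVS kernel modules being available; if they are missing, kube-proxy may fall back to iptables or log errors. A hedged pre-check you can run on each worker node (not part of the original steps):

# Load the IPVS modules; add them to /etc/modules-load.d/ if you want them loaded at boot
# (on older kernels the conntrack module may be named nf_conntrack_ipv4 instead of nf_conntrack)
modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack
# Verify they are present
lsmod | grep -e ip_vs -e nf_conntrack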

2. Configure the kube-proxy systemd service

[root@node2 ~]# cat <<EOF > /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-proxy \\
  --config=/etc/kubernetes/kube-proxy-config.yaml
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
[root@node2 ~]#

Manually download the images

PS: This step needs to be performed on both worker nodes.

Plan A:

As mentioned earlier, pause is the base image of Kubernetes: every pod is started on top of it, so let's pull that image manually first.

[root@node2 ~]# crictl pull docker.io/library/k8s.gcr.io/pause:3.2
[root@node2 ~]# crictl images
IMAGE                                TAG                 IMAGE ID            SIZE
docker.io/library/k8s.gcr.io/pause   3.2                 80d28bedfe5de       298kB
[root@node2 ~]#
# Retag the image
[root@node2 ~]# ctr -n k8s.io i tag docker.io/library/k8s.gcr.io/pause:3.2 k8s.gcr.io/pause:3.2
k8s.gcr.io/pause:3.2
[root@node2 ~]#

Then pull the nginx image as well, since the nginx-proxy manifest above uses it:

[root@node2 ~]# crictl pull docker.io/library/nginx:1.19

Plan B: import the images from local tarballs:

[root@node2 ~]# ctr -n k8s.io image import nginx_1.19.tar.gz
[root@node2 ~]# ctr -n k8s.io image import pause_3.2.tar.gz
[root@node3 ~]# crictl images
IMAGE                       TAG                 IMAGE ID            SIZE
docker.io/library/nginx     1.19                f2f70adc5d89a       146MB
k8s.gcr.io/pause            3.2                 80d28bedfe5de       298kB
[root@node3 ~]#
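
The two archives used above are not created in this article. One possible way to produce them, on a machine that does have registry access, is to pull the images there and export them with ctr; the file names below simply match the import commands above:

# On a machine that can reach the registries:
ctr -n k8s.io image pull docker.io/library/nginx:1.19
ctr -n k8s.io image export nginx_1.19.tar.gz docker.io/library/nginx:1.19
# Repeat the same pull/export for the pause image, then copy the archives to the worker nodes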

Fix a startup error

Create the following directory on node2, otherwise kubelet will report an error on startup (node3 already has it from the nginx-proxy step above).

[root@node2 ~]# mkdir -pv /etc/kubernetes/manifests/
mkdir: created directory "/etc/kubernetes/manifests/"
[root@node2 ~]#

Start the services

PS: This step needs to be performed on both worker nodes.

# Reload systemd units
[root@node2 ~]# systemctl daemon-reload
# Enable the services at boot
[root@node2 ~]# systemctl enable kubelet kube-proxy
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /etc/systemd/system/kube-proxy.service.
# Start the services
[root@node2 ~]# systemctl restart kubelet kube-proxy

Verify the services

PS: If kubelet fails to start, check whether swap is disabled; swap must be turned off.
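
If you are not sure whether swap is off, the following commands (not part of the original steps) disable it immediately and keep it off after a reboot:

# Turn swap off right now
swapoff -a
# Comment out any swap entries in /etc/fstab so it stays off after reboot
sed -i '/\sswap\s/ s/^/#/' /etc/fstab
# A swap total of 0 confirms it is disabled
free -h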

# Tail the logs and make sure there are no errors
[root@node2 ~]# journalctl -f -u kubelet
[root@node2 ~]# journalctl -f -u kube-proxy
# Check the service status
[root@node2 ~]# systemctl status kubelet
● kubelet.service - Kubernetes Kubelet
   Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2022-03-20 00:53:26 CST; 15s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 19924 (kubelet)
    Tasks: 0
   Memory: 24.6M
   CGroup: /system.slice/kubelet.service
           ‣ 19924 /usr/local/bin/kubelet --config=/etc/kubernetes/kubelet-config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/containerd...
Mar 20 00:53:31 node2 kubelet[19924]: I0320 00:53:31.974421   19924 plugin_watcher.go:52] Plugin Watcher Start at /var/lib/kubelet/plugins_registry
Mar 20 00:53:31 node2 kubelet[19924]: I0320 00:53:31.974463   19924 plugin_manager.go:112] The desired_state_of_world populator (plugin watcher) starts
Mar 20 00:53:31 node2 kubelet[19924]: I0320 00:53:31.974466   19924 plugin_manager.go:114] Starting Kubelet Plugin Manager
Mar 20 00:53:32 node2 kubelet[19924]: I0320 00:53:31.999922   19924 kubelet_node_status.go:109] Node node2 was previously registered
Mar 20 00:53:32 node2 kubelet[19924]: I0320 00:53:32.000069   19924 kubelet_node_status.go:74] Successfully registered node node2
Mar 20 00:53:32 node2 kubelet[19924]: I0320 00:53:32.009622   19924 setters.go:86] Using node IP: "192.168.112.131"
Mar 20 00:53:32 node2 kubelet[19924]: I0320 00:53:32.147429   19924 kubelet.go:1888] SyncLoop (ADD, "file"): ""
Mar 20 00:53:32 node2 kubelet[19924]: I0320 00:53:32.147490   19924 kubelet.go:1888] SyncLoop (ADD, "api"): ""
Mar 20 00:53:32 node2 kubelet[19924]: I0320 00:53:32.239142   19924 reconciler.go:157] Reconciler: start to sync state
Mar 20 00:53:36 node2 kubelet[19924]: E0320 00:53:36.989535   19924 kubelet.go:2163] Container runtime network not ready: NetworkReady=false reason:NetworkPl...initialized
Hint: Some lines were ellipsized, use -l to show in full.
[root@node2 ~]#
[root@node2 ~]# systemctl status kube-proxy
● kube-proxy.service - Kubernetes Kube Proxy
   Loaded: loaded (/etc/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2022-03-20 00:35:16 CST; 20min ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 16321 (kube-proxy)
    Tasks: 5
   Memory: 48.1M
   CGroup: /system.slice/kube-proxy.service
           └─16321 /usr/local/bin/kube-proxy --config=/etc/kubernetes/kube-proxy-config.yaml
Mar 20 00:35:51 node2 kube-proxy[16321]: I0320 00:35:51.363486   16321 conntrack.go:52] Setting nf_conntrack_max to 131072
Mar 20 00:35:51 node2 kube-proxy[16321]: I0320 00:35:51.369941   16321 conntrack.go:83] Setting conntrack hashsize to 32768
Mar 20 00:35:51 node2 kube-proxy[16321]: I0320 00:35:51.370186   16321 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
Mar 20 00:35:51 node2 kube-proxy[16321]: I0320 00:35:51.370229   16321 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
Mar 20 00:35:51 node2 kube-proxy[16321]: I0320 00:35:51.370494   16321 config.go:315] Starting service config controller
Mar 20 00:35:51 node2 kube-proxy[16321]: I0320 00:35:51.370502   16321 shared_informer.go:240] Waiting for caches to sync for service config
Mar 20 00:35:51 node2 kube-proxy[16321]: I0320 00:35:51.370516   16321 config.go:224] Starting endpoint slice config controller
Mar 20 00:35:51 node2 kube-proxy[16321]: I0320 00:35:51.370521   16321 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
Mar 20 00:35:51 node2 kube-proxy[16321]: I0320 00:35:51.471286   16321 shared_informer.go:247] Caches are synced for endpoint slice config
Mar 20 00:35:51 node2 kube-proxy[16321]: I0320 00:35:51.471357   16321 shared_informer.go:247] Caches are synced for service config
[root@node2 ~]#

Also run the following command on node3; you should see one pod (nginx-proxy) running:

[root@node3 ~]# crictl ps
CONTAINER           IMAGE               CREATED             STATE      NAME          ATTEMPT    POD ID
33a9d3fb9d4d4       f2f70adc5d89a       3 minutes ago       Running    nginx-proxy   0          aa6a85269893d
[root@node3 ~]#
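
Finally, from a machine that has kubectl and the admin kubeconfig set up in the earlier articles (for example the master node), you can check that both workers registered with the apiserver; until the network plugin from the next article is installed, they will typically show NotReady:

kubectl get nodes -o wide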

That concludes this article. Careful readers may have noticed an error in the logs saying the network plugin is not ready; in the next article we will cover deploying the network plugin.
