Kubernetes (k8s) Dynamic Storage Provisioning

Contributed post · 2022-10-12



This guide uses an NFS file system to implement dynamic volume provisioning in Kubernetes.

1. Install the server and client

```shell
root@hello:~# apt install nfs-kernel-server nfs-common
```

Here, nfs-kernel-server is the server package and nfs-common is the client package.

2. Configure the NFS shared directory

```shell
root@hello:~# mkdir /nfs
root@hello:~# sudo vim /etc/exports
```

Add the following line to /etc/exports:

```shell
/nfs *(rw,sync,no_root_squash,no_subtree_check)
```

The fields are as follows:

- /nfs: the directory to share.
- The second field specifies which client IPs may access the shared directory: * allows all hosts, 192.168.3.0/24 allows a whole subnet, and 192.168.3.29 allows a single IP.
- rw: read-write access. Use ro if you want the share to be read-only.
- sync: writes are committed to both memory and disk before the server replies.
- async: writes may be buffered in memory first rather than written straight to disk.
- no_root_squash: a user who accesses the share as root keeps root privileges on the shared directory. This option is highly insecure and generally discouraged, but it is required if clients need to write to the NFS directory as root; convenience and safety trade off here.
- root_squash: a user who accesses the share as root is squashed to an anonymous user, typically taking the UID and GID of the nobody system account.
- subtree_check: forces NFS to check parent-directory permissions.
- no_subtree_check: skips the parent-directory check (the default in current versions of nfs-kernel-server).
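For illustration, a few hypothetical /etc/exports entries combining these options (the /data and /home paths, subnet, and client IP below are examples, not values from this setup):

```shell
# /etc/exports — illustrative entries (not from this tutorial's server)
/nfs   *(rw,sync,no_root_squash,no_subtree_check)    # any client, read-write, remote root kept as root
/data  192.168.3.0/24(ro,sync,root_squash)           # one subnet, read-only, root squashed to nobody
/home  192.168.3.29(rw,async,no_subtree_check)       # a single client IP, buffered (async) writes
```

Each entry is a path followed by one or more client specifications, each with its own option list in parentheses.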

After editing, export the shared directory, then restart and enable the NFS service:

```shell
root@hello:~# exportfs -a
root@hello:~# systemctl restart nfs-kernel-server
root@hello:~# systemctl enable nfs-kernel-server
```

Mount on the client

```shell
root@hello:~# apt install nfs-common
root@hello:~# mkdir -p /nfs/
root@hello:~# mount -t nfs 192.168.1.66:/nfs/ /nfs/
```

```shell
root@hello:~# df -hT
Filesystem                        Type      Size  Used Avail Use% Mounted on
udev                              devtmpfs  7.8G     0  7.8G   0% /dev
tmpfs                             tmpfs     1.6G  2.9M  1.6G   1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv ext4       97G  9.9G   83G  11% /
tmpfs                             tmpfs     7.9G     0  7.9G   0% /dev/shm
tmpfs                             tmpfs     5.0M     0  5.0M   0% /run/lock
tmpfs                             tmpfs     7.9G     0  7.9G   0% /sys/fs/cgroup
/dev/loop0                        squashfs   56M   56M     0 100% /snap/core18/2128
/dev/loop1                        squashfs   56M   56M     0 100% /snap/core18/2246
/dev/loop3                        squashfs   33M   33M     0 100% /snap/snapd/12704
/dev/loop2                        squashfs   62M   62M     0 100% /snap/core20/1169
/dev/loop4                        squashfs   33M   33M     0 100% /snap/snapd/13640
/dev/loop6                        squashfs   68M   68M     0 100% /snap/lxd/21835
/dev/loop5                        squashfs   71M   71M     0 100% /snap/lxd/21029
/dev/sda2                         ext4      976M  107M  803M  12% /boot
tmpfs                             tmpfs     1.6G     0  1.6G   0% /run/user/0
192.168.1.66:/nfs                 nfs4       97G  6.4G   86G   7% /nfs
```
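A mount made this way does not survive a reboot. If the client should remount the share automatically at boot, an entry can be added to /etc/fstab; this is a sketch assuming the same server address and mount point as above:

```shell
# /etc/fstab — remount the NFS share at boot (illustrative)
# _netdev tells the system to wait for the network before mounting
192.168.1.66:/nfs  /nfs  nfs  defaults,_netdev  0  0
```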

Create and configure the default StorageClass

```shell
[root@k8s-master-node1 ~/yaml]# vim nfs-storage.yaml
[root@k8s-master-node1 ~/yaml]# cat nfs-storage.yaml
```

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true"  ## whether to keep (archive) the PV's contents when it is deleted
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/nfs-subdir-external-provisioner:v4.0.2
          # resources:
          #   limits:
          #     cpu: 10m
          #   requests:
          #     cpu: 10m
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 192.168.1.66  ## your own NFS server address
            - name: NFS_PATH
              value: /nfs/  ## the directory exported by the NFS server
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.1.66
            path: /nfs/
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
```

Apply the manifest

```shell
[root@k8s-master-node1 ~/yaml]# kubectl apply -f nfs-storage.yaml
storageclass.storage.k8s.io/nfs-storage created
deployment.apps/nfs-client-provisioner created
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
```

Verify that the default StorageClass was created

```shell
[root@k8s-master-node1 ~/yaml]# kubectl get storageclasses.storage.k8s.io
NAME                    PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-storage (default)   k8s-sigs.io/nfs-subdir-external-provisioner   Delete          Immediate           false                  100s
```

Create a PVC to test provisioning

```shell
[root@k8s-master-node1 ~/yaml]# vim pvc.yaml
[root@k8s-master-node1 ~/yaml]# cat pvc.yaml
```

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nginx-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 200Mi
```

```shell
[root@k8s-master-node1 ~/yaml]# kubectl apply -f pvc.yaml
persistentvolumeclaim/nginx-pvc created
```
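To confirm the claim is actually usable from a workload, a minimal Pod can mount it. This is a sketch, not part of the original walkthrough; the Pod name and mount path are illustrative:

```yaml
# pod-test.yaml — hypothetical Pod mounting the nginx-pvc claim
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pvc-test
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html  # files written here land on the NFS share
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: nginx-pvc
```

Because the claim is ReadWriteMany, several Pods on different nodes could mount it simultaneously.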

View the PVC

```shell
[root@k8s-master-node1 ~/yaml]# kubectl get pvc
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nginx-pvc   Bound    pvc-8a4b6065-904a-4bae-bef9-1f3b5612986c   200Mi      RWX            nfs-storage    4s
```

View the PV

```shell
[root@k8s-master-node1 ~/yaml]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
pvc-8a4b6065-904a-4bae-bef9-1f3b5612986c   200Mi      RWX            Delete           Bound    default/nginx-pvc   nfs-storage             103s
```
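On the NFS server, each dynamically provisioned volume appears as a subdirectory of the export. By the nfs-subdir-external-provisioner's documented convention the directory is named ${namespace}-${pvcName}-${pvName}, and with archiveOnDelete set to "true" it is kept with an archived- prefix after the PV is deleted. A sketch of deriving the expected path for the PVC above:

```shell
# Derive the provisioned directory name from the objects shown above
ns=default
pvc=nginx-pvc
pv=pvc-8a4b6065-904a-4bae-bef9-1f3b5612986c
dir="${ns}-${pvc}-${pv}"
echo "$dir"
# On the NFS server, the volume's data would then be under /nfs/"$dir":
#   ls -ld /nfs/"$dir"
```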
