Kubernetes Ceph RBD: dynamically provisioning PVs with a StorageClass



Dynamically provisioning PVs based on a StorageClass

1. Create the RBD provisioner, ceph.com/rbd (the value of the PROVISIONER_NAME environment variable in the Deployment below)

# Upload rbd-provisioner.tar.gz and load it manually; the archive packages the provisioner image

[root@master ceph]# docker load -i rbd-provisioner.tar.gz
1d31b5806ba4: Loading layer [==================================================>]  208.3MB/208.3MB
499d93e0e038: Loading layer [==================================================>]  164.1MB/164.1MB
7c9bb3d61493: Loading layer [==================================================>]  44.52MB/44.52MB
Loaded image: quay.io/xianchao/external_storage/rbd-provisioner:v1
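Note that docker load only imports the image on the node where it runs. On a multi-node cluster the image must be present on whichever worker the Deployment gets scheduled to, so either load it on every worker or push it to a registry. A minimal sketch, assuming a hypothetical worker hostname node1:

# node1 is a placeholder hostname; repeat for each worker node
scp rbd-provisioner.tar.gz node1:/root/
ssh node1 "docker load -i /root/rbd-provisioner.tar.gz"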

[root@master rbd-provisioner]# cat rbd-provisioner.yaml
kind: ClusterRole            # Defines a ClusterRole listing the resources the provisioner may operate on
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["kube-dns","coredns"]
    verbs: ["list", "get"]
---
kind: ClusterRoleBinding     # Binds the ClusterRole above to the ServiceAccount rbd-provisioner defined below
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
subjects:
  - kind: ServiceAccount
    name: rbd-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: rbd-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: rbd-provisioner
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rbd-provisioner
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: rbd-provisioner
subjects:
- kind: ServiceAccount
  name: rbd-provisioner
  namespace: default
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rbd-provisioner
spec:
  selector:
    matchLabels:
      app: rbd-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: rbd-provisioner
    spec:
      containers:
      - name: rbd-provisioner
        image: quay.io/xianchao/external_storage/rbd-provisioner:v1
        imagePullPolicy: IfNotPresent
        env:
        - name: PROVISIONER_NAME   # the provisioner name referenced by the StorageClass
          value: ceph.com/rbd
      serviceAccount: rbd-provisioner
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rbd-provisioner
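The apply step for this manifest is not captured in the transcript; assuming the file name shown above, it would be:

[root@master rbd-provisioner]# kubectl apply -f rbd-provisioner.yaml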

[root@master rbd-provisioner]# kubectl get pod
NAME                               READY   STATUS    RESTARTS   AGE
rbd-provisioner-6bbc95cd74-g6lcd   1/1     Running   0          9s

2. Create the ceph-secret

# Create a Ceph pool

[root@master1-admin ceph]# ceph osd pool create k8stest1 56
pool 'k8stest1' created
[root@master1-admin ceph]# ceph osd pool ls
rbd
cephfs_data
cephfs_metadata
k8srbd1
k8stest
k8stest1

[root@master1-admin ~]# ceph auth get-key client.admin | base64
QVFDOWF4eGhPM0UzTlJBQUJZZnVCMlZISVJGREFCZHN0UGhMc3c9PQ==
[root@master rbd-provisioner]# cat ceph-secret-1.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret-1
type: "ceph.com/rbd"
data:
  key: QVFDOWF4eGhPM0UzTlJBQUJZZnVCMlZISVJGREFCZHN0UGhMc3c9PQ==
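ceph auth get-key prints the raw key, so piping it through base64 yields exactly the encoded value a Secret's data field expects. The apply step is again not shown in the transcript; assuming the file above:

[root@master rbd-provisioner]# kubectl apply -f ceph-secret-1.yaml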

[root@master rbd-provisioner]# kubectl get secret
NAME                                 TYPE                                  DATA   AGE
ceph-secret                          Opaque                                1      19h
ceph-secret-1                        ceph.com/rbd                          1      2m33s
default-token-cwbdx                  kubernetes.io/service-account-token   3      91d
nfs-client-provisioner-token-plww9   kubernetes.io/service-account-token   3      19d
qingcloud                            kubernetes.io/dockerconfigjson        1      91d
rbd-provisioner-token-82bql          kubernetes.io/service-account-token   3      10m

3. Create the StorageClass

[root@master ceph]# cat rbd-provisioner/storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: k8s-rbd
provisioner: ceph.com/rbd
parameters:
  monitors: 192.168.0.5:6789,192.168.0.6:6789,192.168.0.7:6789
  adminId: admin
  adminSecretName: ceph-secret-1
  pool: k8stest1
  userId: admin
  userSecretName: ceph-secret-1
  fsType: xfs
  imageFormat: "2"
  imageFeatures: "layering"
# pool: k8stest1 is the pool created in step 2
[root@master rbd-provisioner]# kubectl apply -f storageclass.yaml
storageclass.storage.k8s.io/k8s-rbd created
[root@master rbd-provisioner]# kubectl get sc
NAME      PROVISIONER    RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
k8s-rbd   ceph.com/rbd   Delete          Immediate           false                  7s
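The kubectl get sc output shows the defaults this manifest inherits: RECLAIMPOLICY Delete and VOLUMEBINDINGMODE Immediate, meaning the backing RBD image is removed when the PVC is deleted. If you want provisioned images to survive PVC deletion, the policy can be set explicitly; a minimal sketch, not part of the original manifest:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: k8s-rbd
provisioner: ceph.com/rbd
reclaimPolicy: Retain        # default is Delete; Retain keeps the PV and its backing RBD image
parameters:
  # same parameters block as the manifest above
  monitors: 192.168.0.5:6789,192.168.0.6:6789,192.168.0.7:6789
  adminId: admin
  adminSecretName: ceph-secret-1
  pool: k8stest1
  userId: admin
  userSecretName: ceph-secret-1
  fsType: xfs
  imageFormat: "2"
  imageFeatures: "layering"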

From this point on, whenever a PVC requests storage through this StorageClass, Kubernetes finds the ceph.com/rbd provisioner, which carves a new PV out of the Ceph cluster.

4. Create the PVC

If the PVC requests the ReadWriteMany access mode, provisioning fails with: invalid AccessModes [ReadWriteMany]: only AccessModes [ReadWriteOnce ReadOnlyMany] are supported. In other words, RBD does not support ReadWriteMany: an RBD image is a block device and can only be mounted read-write by a single node.
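For reference, the failing claim would have looked like the following; this is a reconstruction (the ReadWriteMany version of the manifest is not shown in the transcript), differing from the working one only in accessModes:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: rbd-pvc
spec:
  accessModes:
    - ReadWriteMany        # rejected: RBD only supports ReadWriteOnce and ReadOnlyMany
  volumeMode: Filesystem
  resources:
    requests:
      storage: 1Gi
  storageClassName: k8s-rbd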

[root@master rbd-provisioner]# kubectl get pvc
NAME                  STATUS    VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS    AGE
ceph-pvc              Bound     ceph-pv      1Gi        RWX                            18h
example-local-claim   Bound     example-pv   5Gi        RWO            local-storage   63d
rbd-pvc               Pending                                          k8s-rbd         5s
[root@master rbd-provisioner]# kubectl describe pvc rbd-pvc
Name:          rbd-pvc
Namespace:     default
StorageClass:  k8s-rbd
Status:        Pending
Volume:
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner: ceph.com/rbd
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Mounted By:    <none>
Events:
  Type     Reason                Age                From                                                                                Message
  ----     ------                ----               ----                                                                                -------
  Normal   Provisioning          10s (x2 over 25s)  ceph.com/rbd_rbd-provisioner-6bbc95cd74-g6lcd_ee50d24f-015f-11ec-a6a6-9e54668c01e4  External provisioner is provisioning volume for claim "default/rbd-pvc"
  Warning  ProvisioningFailed    10s (x2 over 25s)  ceph.com/rbd_rbd-provisioner-6bbc95cd74-g6lcd_ee50d24f-015f-11ec-a6a6-9e54668c01e4  failed to provision volume with StorageClass "k8s-rbd": invalid AccessModes [ReadWriteMany]: only AccessModes [ReadWriteOnce ReadOnlyMany] are supported
  Normal   ExternalProvisioning  10s (x3 over 25s)  persistentvolume-controller                                                         waiting for a volume to be created, either by external provisioner "ceph.com/rbd" or manually created by system administrator
[root@master rbd-provisioner]# kubectl delete -f rbd-pvc.yaml
persistentvolumeclaim "rbd-pvc" deleted
[root@master rbd-provisioner]# vim rbd-pvc.yaml
[root@master rbd-provisioner]# cat rbd-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 1Gi
  storageClassName: k8s-rbd
[root@master rbd-provisioner]# kubectl apply -f rbd-pvc.yaml
persistentvolumeclaim/rbd-pvc created
[root@master rbd-provisioner]# kubectl get pvc
NAME                  STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS    AGE
ceph-pvc              Bound    ceph-pv                                    1Gi        RWX                            19h
example-local-claim   Bound    example-pv                                 5Gi        RWO            local-storage   63d
rbd-pvc               Bound    pvc-5eef6286-c89f-421a-88d1-1a6593423087   1Gi        RWO            k8s-rbd         6s
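Once the PVC is Bound, the provisioner has created a matching RBD image in the pool. This can be confirmed on the Ceph admin node (a verification step not in the original transcript):

# Lists the backing images in the pool; the new one is typically named
# kubernetes-dynamic-pvc-<uuid>, derived from the PV name:
rbd ls -p k8stest1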

5. Create a Pod that mounts the PVC

[root@master rbd-provisioner]# cat pod-sto.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: rbd-pod
  name: ceph-rbd-pod
spec:
  containers:
  - name: ceph-rbd-nginx
    image: nginx
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: ceph-rbd
      mountPath: /mnt
      readOnly: false
  volumes:
  - name: ceph-rbd
    persistentVolumeClaim:
      claimName: rbd-pvc
[root@master rbd-provisioner]# kubectl get pod
NAME           READY   STATUS    RESTARTS   AGE
ceph-rbd-pod   1/1     Running   0          26s
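To verify the RBD volume is actually mounted, you can check the filesystem inside the pod (a quick check, not part of the original transcript); /mnt should be backed by the xfs filesystem configured in the StorageClass:

# Show the filesystem backing /mnt inside the pod
kubectl exec -it ceph-rbd-pod -- df -h /mnt
# Write a test file to confirm the mount is read-write
kubectl exec -it ceph-rbd-pod -- sh -c 'echo hello > /mnt/test && cat /mnt/test'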
