Charmed K8s - Volume not created by Ceph storage class

I deployed an on-premises bare-metal Kubernetes cloud with Charmed Kubernetes, and I am using Ceph for storage.

The cluster deployment went fine, but when I try to create a volume (via a PVC), the volume stays in the Pending state. The logs show an error saying that an operation for that volume ID already exists, and then the operation times out.
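For reference, the claim was created with a standard manifest roughly like the following (the access mode and requested size here are illustrative assumptions; the name, namespace, and storage class match the `kubectl describe pvc` output further down):

```yaml
# Example PVC similar to the one that stays Pending.
# accessModes and storage size are assumptions, not taken from the original post.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myvol
  namespace: pa-cnfdevops-paccard
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ceph-xfs
```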

I followed, more or less, the Canonical tutorial (https://ubuntu.com/tutorials/how-to-build-a-ceph-backed-kubernetes-cluster#1-overview), without success.

Here is some more information:

Ceph pools:

root@infra01:~/k8s-test# juju run-action --wait ceph-mon/leader list-pools
unit-ceph-mon-0:
  UnitId: ceph-mon/0
  id: "28"
  results:
    message: |
      1 device_health_metrics
      2 xfs-pool
      3 ext4-pool
  status: completed
  timing:
    completed: 2021-04-27 14:34:02 +0000 UTC
    enqueued: 2021-04-27 14:34:01 +0000 UTC
    started: 2021-04-27 14:34:01 +0000 UTC

Storage classes:

root@infra01:~# kubectl describe sc
Name:            ceph-ext4
IsDefaultClass:  No
Annotations:     kubectl.kubernetes.io/last-applied-configuration={"allowVolumeExpansion":true,"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"labels":{"cdk-addons":"true"},"name":"ceph-ext4"},"mountOptions":["discard"],"parameters":{"clusterID":"4898f638-a1a7-11eb-a288-5bcd87ddd233","csi.storage.k8s.io/controller-expand-secret-name":"csi-rbd-secret","csi.storage.k8s.io/controller-expand-secret-namespace":"default","csi.storage.k8s.io/fstype":"ext4","csi.storage.k8s.io/node-stage-secret-name":"csi-rbd-secret","csi.storage.k8s.io/node-stage-secret-namespace":"default","csi.storage.k8s.io/provisioner-secret-name":"csi-rbd-secret","csi.storage.k8s.io/provisioner-secret-namespace":"default","imageFeatures":"layering","pool":"ext4-pool"},"provisioner":"rbd.csi.ceph.com","reclaimPolicy":"Delete"}

Provisioner:           rbd.csi.ceph.com
Parameters:            clusterID=4898f638-a1a7-11eb-a288-5bcd87ddd233,csi.storage.k8s.io/controller-expand-secret-name=csi-rbd-secret,csi.storage.k8s.io/controller-expand-secret-namespace=default,csi.storage.k8s.io/fstype=ext4,csi.storage.k8s.io/node-stage-secret-name=csi-rbd-secret,csi.storage.k8s.io/node-stage-secret-namespace=default,csi.storage.k8s.io/provisioner-secret-name=csi-rbd-secret,csi.storage.k8s.io/provisioner-secret-namespace=default,imageFeatures=layering,pool=ext4-pool
AllowVolumeExpansion:  True
MountOptions:
  discard
ReclaimPolicy:      Delete
VolumeBindingMode:  Immediate
Events:             <none>


Name:            ceph-xfs
IsDefaultClass:  Yes
Annotations:     kubectl.kubernetes.io/last-applied-configuration={"allowVolumeExpansion":true,"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"cdk-addons":"true"},"name":"ceph-xfs"},"mountOptions":["discard"],"parameters":{"clusterID":"4898f638-a1a7-11eb-a288-5bcd87ddd233","csi.storage.k8s.io/controller-expand-secret-name":"csi-rbd-secret","csi.storage.k8s.io/controller-expand-secret-namespace":"default","csi.storage.k8s.io/fstype":"xfs","csi.storage.k8s.io/node-stage-secret-name":"csi-rbd-secret","csi.storage.k8s.io/node-stage-secret-namespace":"default","csi.storage.k8s.io/provisioner-secret-name":"csi-rbd-secret","csi.storage.k8s.io/provisioner-secret-namespace":"default","imageFeatures":"layering","pool":"xfs-pool"},"provisioner":"rbd.csi.ceph.com","reclaimPolicy":"Delete"}
,storageclass.kubernetes.io/is-default-class=true
Provisioner:           rbd.csi.ceph.com
Parameters:            clusterID=4898f638-a1a7-11eb-a288-5bcd87ddd233,csi.storage.k8s.io/controller-expand-secret-name=csi-rbd-secret,csi.storage.k8s.io/controller-expand-secret-namespace=default,csi.storage.k8s.io/fstype=xfs,csi.storage.k8s.io/node-stage-secret-name=csi-rbd-secret,csi.storage.k8s.io/node-stage-secret-namespace=default,csi.storage.k8s.io/provisioner-secret-name=csi-rbd-secret,csi.storage.k8s.io/provisioner-secret-namespace=default,imageFeatures=layering,pool=xfs-pool
AllowVolumeExpansion:  True
MountOptions:
  discard
ReclaimPolicy:      Delete
VolumeBindingMode:  Immediate
Events:             <none>

And the persistent volume claim:

root@infra01:~/k8s-test# kubectl describe pvc myvol
Name:          myvol
Namespace:     pa-cnfdevops-paccard
StorageClass:  ceph-xfs
Status:        Pending
Volume:
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner: rbd.csi.ceph.com
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Used By:       <none>
Events:
  Type     Reason                Age                    From                                                                                              Message
  ----     ------                ----                   ----                                                                                              -------
  Warning  ProvisioningFailed    34m                    rbd.csi.ceph.com_csi-rbdplugin-provisioner-549c6b54c6-5ddcl_a95b48b7-31eb-4cc5-bcbe-2715dbd43451  failed to provision volume with StorageClass "ceph-xfs": rpc error: code = DeadlineExceeded desc = context deadline exceeded
  Normal   Provisioning          32m (x9 over 37m)      rbd.csi.ceph.com_csi-rbdplugin-provisioner-549c6b54c6-5ddcl_a95b48b7-31eb-4cc5-bcbe-2715dbd43451  External provisioner is provisioning volume for claim "pa-cnfdevops-paccard/myvol"
  Warning  ProvisioningFailed    32m (x8 over 34m)      rbd.csi.ceph.com_csi-rbdplugin-provisioner-549c6b54c6-5ddcl_a95b48b7-31eb-4cc5-bcbe-2715dbd43451  failed to provision volume with StorageClass "ceph-xfs": rpc error: code = Aborted desc = an operation with the given Volume ID pvc-0414bcd6-38ff-4523-b2bd-93e3ef58eea5 already exists
  Normal   ExternalProvisioning  31m (x26 over 36m)     persistentvolume-controller                                                                       waiting for a volume to be created, either by external provisioner "rbd.csi.ceph.com" or manually created by system administrator
  Normal   ExternalProvisioning  28m (x5 over 29m)      persistentvolume-controller                                                                       waiting for a volume to be created, either by external provisioner "rbd.csi.ceph.com" or manually created by system administrator
  Warning  ProvisioningFailed    27m                    rbd.csi.ceph.com_csi-rbdplugin-provisioner-549c6b54c6-9r2nh_11ff8e47-a296-401e-8f08-799f22fd5a02  failed to provision volume with StorageClass "ceph-xfs": rpc error: code = DeadlineExceeded desc = context deadline exceeded
  Normal   Provisioning          4m4s (x14 over 30m)    rbd.csi.ceph.com_csi-rbdplugin-provisioner-549c6b54c6-9r2nh_11ff8e47-a296-401e-8f08-799f22fd5a02  External provisioner is provisioning volume for claim "pa-cnfdevops-paccard/myvol"
  Warning  ProvisioningFailed    4m4s (x12 over 27m)    rbd.csi.ceph.com_csi-rbdplugin-provisioner-549c6b54c6-9r2nh_11ff8e47-a296-401e-8f08-799f22fd5a02  failed to provision volume with StorageClass "ceph-xfs": rpc error: code = Aborted desc = an operation with the given Volume ID pvc-0414bcd6-38ff-4523-b2bd-93e3ef58eea5 already exists
  Normal   ExternalProvisioning  2m53s (x105 over 27m)  persistentvolume-controller                                                                       waiting for a volume to be created, either by external provisioner "rbd.csi.ceph.com" or manually created by system administrator
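The repeating `an operation with the given Volume ID ... already exists` error usually means a previous `CreateVolume` call is still stuck inside the CSI plugin (often because it cannot reach the Ceph monitors), so every retry is rejected while the first attempt blocks. As a sketch of what I could check next (pod name is taken from the events above; the `default` namespace and the container names are assumptions based on a standard ceph-csi RBD deployment, so adjust them to your setup):

```shell
# Inspect the stuck provisioning attempt inside the RBD plugin container
kubectl -n default logs csi-rbdplugin-provisioner-549c6b54c6-5ddcl -c csi-rbdplugin

# And the external-provisioner sidecar, which logs the retry loop
kubectl -n default logs csi-rbdplugin-provisioner-549c6b54c6-5ddcl -c csi-provisioner
```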

Does anyone know what could cause this and how to fix it?

Many thanks in advance!

asked 27 April 2021 at 22:10

1 answer

I have a similar problem on vSphere. See my question: Charmed Kubernetes with Ceph on VMware does not work.

Best regards,

Stefano

answered 7 May 2021 at 17:43
