k8s is very slow in attaching volumes when dealing with a large number of VolumeAttachment objects (k8s issue https://github.com/kubernetes/kubernetes/issues/84169). A VolumeAttachment is a CR created solely to tell its watcher, the csi-attacher, to call the ControllerPublish/ControllerUnpublish gRPCs, which do the work of attaching a volume to a node, for example calling the DigitalOcean Block Storage API to attach a created volume to a specified node. Since a ZFSPV volume is already present locally on the node, nothing needs to be done in ControllerPublish/ControllerUnpublish, so this change avoids creating the VolumeAttachment object and removes the csi-attacher container, which is not needed either since it only acts on VolumeAttachment objects. The csi-cluster-driver-registrar container is also removed, as it is deprecated and no longer needed. We are using the csidriver beta CRDs, so the minimum required k8s version is 1.14+. Signed-off-by: Pawan <pawan@mayadata.io>
From zfs-driver v0.6 onward, the ZFS-LocalPV related CRs are grouped together in their own group, zfs.openebs.io. The steps below describe how to upgrade to the refactored CRDs. Please do not provision or deprovision any volumes during the upgrade.

Steps to upgrade:
- Apply the new CRD

```
$ kubectl apply -f upgrade/crd.yaml
customresourcedefinition.apiextensions.k8s.io/zfsvolumes.zfs.openebs.io created
customresourcedefinition.apiextensions.k8s.io/zfssnapshots.zfs.openebs.io created
```
- Run upgrade.sh

```
$ sh upgrade/upgrade.sh
zfsvolume.zfs.openebs.io/pvc-086a8608-9057-42df-b684-ee4ae8d35f71 created
zfsvolume.zfs.openebs.io/pvc-5286d646-c93d-4413-9707-fd95ebaae8c0 created
zfsvolume.zfs.openebs.io/pvc-74abefb8-8423-4b13-a607-7184ef088fb5 created
zfsvolume.zfs.openebs.io/pvc-82368c44-eee8-47ee-85a6-633a8023faa8 created
zfssnapshot.zfs.openebs.io/snapshot-dc61a056-f495-482b-8e6e-e7ddc4c13f47 created
zfssnapshot.zfs.openebs.io/snapshot-f9db91ea-529e-4dac-b2b8-ead045c612da created
```
Please note: if you have modified the OPENEBS_NAMESPACE env in the driver's deployment to a different namespace, you have to pass that namespace as an argument to the upgrade script: `sh upgrade/upgrade.sh [namespace]`.
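At its core, this migration moves each existing CR from the old group to the new one. Below is a minimal sketch of that rewrite, assuming the old CRs lived under the `openebs.io` group and the new ones live under `zfs.openebs.io`; the `v1alpha1` version suffix is an assumption for illustration, not necessarily what upgrade.sh uses:

```shell
# Hypothetical sketch of the group rename upgrade.sh performs:
# take an old-group manifest and rewrite its apiVersion to the new group.
old_manifest='apiVersion: openebs.io/v1alpha1
kind: ZFSVolume
metadata:
  name: pvc-086a8608-9057-42df-b684-ee4ae8d35f71'

# Rewrite only the apiVersion line; everything else is untouched.
new_manifest=$(printf '%s\n' "$old_manifest" \
  | sed 's|^apiVersion: openebs.io/|apiVersion: zfs.openebs.io/|')

printf '%s\n' "$new_manifest"
```

In the real script the rewritten manifests would then be applied with kubectl; only the text transformation is shown here.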
- Upgrade the driver to v0.6

```
$ kubectl apply -f https://raw.githubusercontent.com/openebs/zfs-localpv/v0.6.x/deploy/zfs-operator.yaml
```
For future releases, if you want to upgrade from v0.4 or v0.5 to a newer version, replace v0.6.x with the desired version. Check that everything is working after upgrading the zfs-driver, then run the cleanup script to remove the old CRDs.
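If you script the driver upgrade, the release can be kept in a variable so the same command works for future versions. The URL below assumes the operator YAML is fetched from the raw.githubusercontent.com mirror of the repository path, with the release branch substituted in:

```shell
# Pick the release branch you are upgrading to; v0.6.x is the current one.
ZFS_DRIVER_VERSION="v0.6.x"
OPERATOR_URL="https://raw.githubusercontent.com/openebs/zfs-localpv/${ZFS_DRIVER_VERSION}/deploy/zfs-operator.yaml"
echo "$OPERATOR_URL"
# then apply it: kubectl apply -f "$OPERATOR_URL"
```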
- Run cleanup.sh

```
$ sh upgrade/cleanup.sh
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
zfsvolume.openebs.io/pvc-086a8608-9057-42df-b684-ee4ae8d35f71 configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
zfsvolume.openebs.io/pvc-5286d646-c93d-4413-9707-fd95ebaae8c0 configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
zfsvolume.openebs.io/pvc-74abefb8-8423-4b13-a607-7184ef088fb5 configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
zfsvolume.openebs.io/pvc-82368c44-eee8-47ee-85a6-633a8023faa8 configured
zfsvolume.openebs.io "pvc-086a8608-9057-42df-b684-ee4ae8d35f71" deleted
zfsvolume.openebs.io "pvc-5286d646-c93d-4413-9707-fd95ebaae8c0" deleted
zfsvolume.openebs.io "pvc-74abefb8-8423-4b13-a607-7184ef088fb5" deleted
zfsvolume.openebs.io "pvc-82368c44-eee8-47ee-85a6-633a8023faa8" deleted
customresourcedefinition.apiextensions.k8s.io "zfsvolumes.openebs.io" deleted
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
zfssnapshot.openebs.io/snapshot-dc61a056-f495-482b-8e6e-e7ddc4c13f47 configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
zfssnapshot.openebs.io/snapshot-f9db91ea-529e-4dac-b2b8-ead045c612da configured
zfssnapshot.openebs.io "snapshot-dc61a056-f495-482b-8e6e-e7ddc4c13f47" deleted
zfssnapshot.openebs.io "snapshot-f9db91ea-529e-4dac-b2b8-ead045c612da" deleted
customresourcedefinition.apiextensions.k8s.io "zfssnapshots.openebs.io" deleted
```
Please note: if you have modified the OPENEBS_NAMESPACE env in the driver's deployment to a different namespace, you have to pass that namespace as an argument to the cleanup script: `sh upgrade/cleanup.sh [namespace]`.
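Both helper scripts take the namespace as an optional first argument. A minimal sketch of how such a script might read it, assuming `openebs` as the default namespace (the default name here is an assumption for illustration):

```shell
# Hypothetical argument handling for upgrade.sh / cleanup.sh:
# use the first positional argument if given, otherwise fall back to a default.
NAMESPACE="${1:-openebs}"
echo "operating on namespace: $NAMESPACE"
# the script would then pass this along to kubectl, e.g.:
#   kubectl get zfsvolumes -n "$NAMESPACE"
```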
- Restart kube-controller-manager [optional]. The kube-controller-manager might still be using stale VolumeAttachment resources and may get flooded with error logs. Restarting the kube-controller-manager will fix this.