Mirror of https://github.com/TECHNOFAB11/zfs-localpv.git (synced 2026-02-02 15:45:13 +01:00)
chore(doc): Updated the readme/docs with zfs-driver scheduler information (#308)
Signed-off-by: w3aman <aman.gupta@mayadata.io>
parent db080d63e0
commit 3cd6caeca2
2 changed files with 16 additions and 7 deletions
@@ -97,9 +97,9 @@ We can also install it via kustomize using `kubectl apply -k deploy/yamls`, chec
 **NOTE:** For some Kubernetes distributions, the `kubelet` directory must be changed at all relevant places in the YAML powering the operator (both the `openebs-zfs-controller` and `openebs-zfs-node`).

-For `microk8s`, we need to change the kubelet directory to `/var/snap/microk8s/common/var/lib/kubelet/`, we need to replace `/var/lib/kubelet/` with `/var/snap/microk8s/common/var/lib/kubelet/` at all the places in the operator yaml and then we can apply it on microk8s.
+- For `microk8s`, we need to change the kubelet directory to `/var/snap/microk8s/common/var/lib/kubelet/`: replace `/var/lib/kubelet/` with `/var/snap/microk8s/common/var/lib/kubelet/` at all the places in the operator yaml and then apply it on microk8s.

-For `k0s`, the default directory (`/var/lib/kubelet`) should be changed to `/var/lib/k0s/kubelet`.
+- For `k0s`, the default directory (`/var/lib/kubelet`) should be changed to `/var/lib/k0s/kubelet`.

 Verify that the ZFS driver components are installed and running using the below command:
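As a rough sketch of the path substitution the bullets above describe (the local filename `zfs-operator.yaml`, as well as the namespace and label used for verification, are assumptions for illustration, not taken from this diff):

```sh
# Rewrite the default kubelet path for microk8s in a locally saved copy of the
# operator manifest (filename is illustrative), then apply it.
sed -i 's|/var/lib/kubelet/|/var/snap/microk8s/common/var/lib/kubelet/|g' zfs-operator.yaml
kubectl apply -f zfs-operator.yaml

# For k0s the replacement target would be /var/lib/k0s/kubelet instead.

# Check that the controller and node components come up; the namespace and
# label selector may differ between releases.
kubectl get pods -n kube-system -l role=openebs-zfs
```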
@@ -233,12 +233,11 @@ Please note that the provisioner name for ZFS driver is "zfs.csi.openebs.io", we
 ##### Scheduler

-The ZFS driver has a scheduler which will try to distribute the PV across the nodes so that one node should not be loaded with all the volumes. Currently the driver has
-VolumeWeighted scheduling algorithm, in which it will try to find a ZFS pool which has less number of volumes provisioned in it from all the nodes where the ZFS pools are available.
+The ZFS driver has its own scheduler which will try to distribute the PVs across the nodes so that one node is not loaded with all the volumes. Currently the driver supports two scheduling algorithms, VolumeWeighted and CapacityWeighted: it will try to find a ZFS pool which has, respectively, the fewest volumes provisioned in it or the least capacity provisioned out of it, across all the nodes where the ZFS pools are available. To learn how to select a scheduler via the storage class, see [this](https://github.com/openebs/zfs-localpv/blob/master/docs/storageclasses.md#storageclass-with-k8s-scheduler).
 Once it is able to find the node, it will create a PV for that node and also create a ZFSVolume custom resource for the volume with the NODE information. The watcher for this ZFSVolume
 CR will get all the information for this object and creates a ZFS dataset (zvol) with the given ZFS properties on the mentioned node.

-The scheduling algorithm currently only accounts for the number of ZFS volumes and does not account for other factors like available cpu or memory while making scheduling decisions.
+The scheduling algorithm currently only accounts for either the number of ZFS volumes or the total capacity occupied from a zpool, and does not account for other factors like available cpu or memory while making scheduling decisions.
 So if you want to use node selector/affinity rules on the application pod, or have cpu/memory constraints, the kubernetes scheduler should be used.
 To make use of the kubernetes scheduler, you can set the `volumeBindingMode` as `WaitForFirstConsumer` in the storage class.
 This will cause delayed binding, i.e. the kubernetes scheduler will schedule the application pod first and then it will ask the ZFS driver to create the PV.
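To illustrate the delayed-binding setup this hunk describes, here is a minimal sketch of a StorageClass that hands node selection to the kubernetes scheduler. The provisioner name and `volumeBindingMode` come from this doc; the class name and the `poolname` parameter are assumptions for illustration:

```sh
# Minimal sketch: defer volume creation until the application pod is scheduled,
# so the kubernetes scheduler (not the ZFS driver scheduler) picks the node.
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv-wffc          # illustrative name
provisioner: zfs.csi.openebs.io     # provisioner name noted above
parameters:
  poolname: "zfspv-pool"            # assumed ZFS pool present on the nodes
volumeBindingMode: WaitForFirstConsumer
EOF
```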