Updating the doc with the supported StorageClass parameters.
Also updated the README with the operator YAML to install the latest release
instead of the CI release, and corrected a few formatting issues in the doc.
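For reference, a sketch of the kind of StorageClass these parameters apply to; the parameter names below reflect commonly documented ZFS-LocalPV options, and the pool name and values are illustrative:
```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv              # illustrative name
provisioner: zfs.csi.openebs.io
parameters:
  poolname: "zfspv-pool"           # assumed ZFS pool name on the nodes
  fstype: "zfs"
  recordsize: "128k"               # illustrative values; the doc lists the
  compression: "off"               # full set of supported parameters
  dedup: "off"
```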
Adding hackmd notes from community meetings.
Signed-off-by: Pawan <pawan@mayadata.io>
The controller does not check whether the volume has been created
and returns success, which in turn binds the PVC to the PV.
The PVC should not be bound until the corresponding ZFS volume has been created.
Now the controller checks that the ZFSVolume CR state is "Ready" before returning
success. The CSI provisioner will retry the CreateVolume request when it gets
an error reply, and once the ZFS node agent creates the ZFS volume and sets the
ZFSVolume CR state to "Ready", the controller will return success for the
CreateVolume request and the PVC will be bound.
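For context, a rough sketch of the ZFSVolume CR the controller now waits on; the API version and field names are illustrative and may differ by release, the key point is the Ready state in the status:
```
apiVersion: zfs.openebs.io/v1
kind: ZFSVolume
metadata:
  name: pvc-example                # hypothetical volume name
  namespace: openebs
spec:
  ownerNodeID: node-1              # node where the ZFS node agent creates the volume
  poolName: zfspv-pool             # assumed pool name
  capacity: "4294967296"
status:
  state: Ready                     # CreateVolume returns success only once this is Ready
```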
Signed-off-by: Pawan <pawan@mayadata.io>
Adding a Grafana dashboard for ZFS-LocalPV that shows the following metrics:
- Volume Capacity (used space percentage)
- ARC Size, Hits, Misses
- L2ARC Size, Hits, Misses
- ZPOOL Read/Write IOs
- ZPOOL Read/Write time
This dashboard was inspired by https://grafana.com/grafana/dashboards/7845
Signed-off-by: Pawan <pawan@mayadata.io>
This commit adds support for users to apply custom labels to Kubernetes nodes and use them in the allowedTopologies section of the StorageClass (see the example after the notes below).
A few notes:
- This PR depends on the CSI driver's capability to support custom topology keys.
- Labels should be added to the nodes first and the driver deployed afterwards, so that
  it is aware of all the labels a node has. If labels are added after the ZFS-LocalPV driver
  has been deployed, a restart of all the node CSI driver agents is required so that the driver
  can pick up the labels and add them as supported topology keys.
- If the StorageClass is using Immediate binding mode and no topology key is mentioned,
  then all the nodes should be labeled using the same key, which means:
  - the same key should be present on all nodes; nodes can have different values for that key.
- If nodes are labeled with different keys, i.e. some nodes have keys that others do not, then ZFSPV's default scheduler cannot effectively do volume-count-based scheduling. In this case the CSI provisioner will pick keys from a random node and prepare the preferred topology list using only the nodes which have those keys defined, and the ZFSPV scheduler will schedule the PV among those nodes only.
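For example, a StorageClass could restrict provisioning to nodes carrying a custom label; the key openebs.io/rack and its values here are purely illustrative, assuming those labels were applied to the nodes before the driver was deployed:
```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv-custom-topo  # illustrative name
provisioner: zfs.csi.openebs.io
parameters:
  poolname: "zfspv-pool"           # assumed pool name
allowedTopologies:
- matchLabelExpressions:
  - key: openebs.io/rack           # custom label added to the nodes before deploying the driver
    values:
    - rack1
    - rack2
```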
Signed-off-by: Pawan <pawan@mayadata.io>
Provide sample instructions on setting up Prometheus via prometheus-operator and then configuring a sample rule to monitor volume space utilization; once available space drops below 10%, the alert starts firing.
```
100 * kubelet_volume_stats_available_bytes{job="kubelet"}
/
kubelet_volume_stats_capacity_bytes{job="kubelet"}
< 10
```
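A sketch of how that expression could be wrapped in a PrometheusRule object for prometheus-operator; the group and rule names, the release label, and the 5m duration are illustrative choices, not taken from the project manifests:
```
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: zfs-localpv-volume-alerts  # illustrative name
  labels:
    release: prometheus            # assumes the rule selector used by the Prometheus install
spec:
  groups:
  - name: zfs-localpv.volume
    rules:
    - alert: VolumeSpaceLow
      expr: |
        100 * kubelet_volume_stats_available_bytes{job="kubelet"}
          /
        kubelet_volume_stats_capacity_bytes{job="kubelet"}
          < 10
      for: 5m
      labels:
        severity: warning
      annotations:
        summary: "PVC {{ $labels.persistentvolumeclaim }} has less than 10% free space"
```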
Signed-off-by: Pawan <pawan@mayadata.io>
so that no two pods get scheduled on the same node. Also keeping
the default replica count at 1; if the HA feature is required, we can change
the replica count to 2 (or more).
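A minimal sketch of such an anti-affinity rule in the controller's pod template; the component label app: openebs-zfs-controller is an assumption, not necessarily the label used by the release manifests:
```
spec:
  replicas: 1                      # default; raise to 2 or more for HA
  template:
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: openebs-zfs-controller    # assumed controller label
            topologyKey: kubernetes.io/hostname  # no two controller pods on one node
```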
Signed-off-by: Pawan <pawan@mayadata.io>