Set the default fstype to ext4 in csi-provisioner. This is helpful when
fsType is not mentioned in the storageclass.
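For illustration, a storageclass like the sketch below that does not set fstype will now get ext4-formatted volumes by default (the name and poolname here are placeholders):
```yaml
# Sketch: no fstype parameter is set, so the provisioned volume defaults to ext4.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv-default-fs
parameters:
  poolname: "zfspv-pool"
provisioner: zfs.csi.openebs.io
```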
Signed-off-by: Shovan Maity <shovan.cse91@gmail.com>
Use case: A node in the Kubernetes cluster is replaced with a new node. The
new node gets a different `kubernetes.io/hostname`. The storage devices
that were attached to the old node are re-attached to the new node.
Fix: Instead of using the default `kubernetes.io/hostname` as the node affinity
label, this commit changes the driver to use `openebs.io/nodeid`. The ZFS LocalPV driver
will pick the value of this label from the node and set it as the affinity.
Once the old node is removed from the cluster, the K8s scheduler will still try to
schedule the applications against the old node's affinity, leaving them pending.
The user can now set the value of `openebs.io/nodeid` on the new node to the same
value that was present on the old node. This makes sure the pods/volumes are
scheduled to the new node.
Note: to migrate the PV to another node, move the disks to that node, remove the
old node from the cluster, and set the same label (same key and value) on the new
node; the k8s scheduler will then schedule the pods to that node.
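For example, relabeling the new node could look like this (node name and value are placeholders):
kubectl label node <new-node> openebs.io/nodeid=<value-from-old-node> --overwrite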
Other updates:
* adding faq doc
* renaming the config variable to nodename
Signed-off-by: Pawan <pawan@mayadata.io>
Co-authored-by: Akhil Mohan <akhilerm@gmail.com>
* Update docs/faq.md
Co-authored-by: Akhil Mohan <akhilerm@gmail.com>
Currently the controller picks one node and the node agent keeps trying to
create the volume on that node, but there might not be enough space available
on that node to create the volume.
The controller can now try all the nodes sequentially and fail
the request only if volume creation fails on all the nodes that satisfy the
topology constraints.
Signed-off-by: Pawan <pawan@mayadata.io>
An encrypted pool does not allow the volume to be pre-created for the
restore purpose. Here we change the design to do the restore first
and then create the ZFSVolume object, which will bind to the volume
already created during the restore.
Signed-off-by: Pawan <pawan@mayadata.io>
For ZFSPV, all the node daemonset pods can go into the terminating state at
the same time, since there is no minimum-availability requirement for those pods.
Changing maxUnavailable to 100% so that K8s can upgrade all the daemonset
pods in parallel.
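A minimal sketch of the corresponding daemonset setting (the rest of the DaemonSet spec is elided):
```yaml
# Sketch: only the update strategy is shown; all other DaemonSet fields are elided.
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 100%
```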
Signed-off-by: Pawan <pawan@mayadata.io>
Added schema validation for the Backup and Restore CRs. Also validating
the server address in the backup/restore controller.
The server address is validated as:
^([0-9]+.[0-9]+.[0-9]+.[0-9]+:[0-9]+)$
which is:
<any number>.<any number>.<any number>.<any number>:<any number>
Here we are validating only the format of the IP, not that the IP itself is
correct, which would be a little more complex. In any case, if the IP is not
correct the zfs send will fail, so there is no need for complex validation of
the IP and port.
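For illustration, such a check could appear in the CR's schema roughly as below; the backupDest property name is an assumption here, and the generated CRD yaml in the repo is authoritative:
```yaml
# Sketch only: address format check inside an openAPIV3Schema validation;
# the backupDest property name is assumed for illustration.
properties:
  spec:
    type: object
    properties:
      backupDest:
        type: string
        pattern: "^([0-9]+.[0-9]+.[0-9]+.[0-9]+:[0-9]+)$"
```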
Signed-off-by: Pawan <pawan@mayadata.io>
Now we have a single operator yaml that works for all
OS distros. We don't need OS-specific operator yamls.
Signed-off-by: Pawan <pawan@mayadata.io>
This commit adds support for the Backup and Restore controllers, which will watch for
the events. The velero plugin will create a Backup CR with the remote location
information to create a backup, and the controller will send the data
to that remote location.
In the same way, the velero plugin will create a Restore CR to restore the
volume from the remote location, and the restore controller will restore
the data.
Steps to use the velero plugin for ZFS-LocalPV:
1. Install velero
2. Add the openebs plugin:
velero plugin add openebs/velero-plugin:latest
3. Create the volumesnapshot location:
For a full backup:
```yaml
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: openebs.io/zfspv-blockstore
  config:
    bucket: velero
    prefix: zfs
    namespace: openebs
    provider: aws
    region: minio
    s3ForcePathStyle: "true"
    s3Url: http://minio.velero.svc:9000
```
For an incremental backup:
```yaml
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: openebs.io/zfspv-blockstore
  config:
    bucket: velero
    prefix: zfs
    backup: incremental
    namespace: openebs
    provider: aws
    region: minio
    s3ForcePathStyle: "true"
    s3Url: http://minio.velero.svc:9000
```
4. Create backup
velero backup create my-backup --snapshot-volumes --include-namespaces=velero-ns --volume-snapshot-locations=aws-cloud-default --storage-location=default
5. Create Schedule
velero create schedule newschedule --schedule="*/1 * * * *" --snapshot-volumes --include-namespaces=velero-ns --volume-snapshot-locations=aws-local-default --storage-location=default
6. Restore from backup
velero restore create --from-backup my-backup --restore-volumes=true --namespace-mappings velero-ns:ns1
Signed-off-by: Pawan <pawan@mayadata.io>
* feat(zfspv): mounting the root filesystem to remove the dependency on the OS
We were mounting the individual libraries to run the zfs
binary inside the ZFS-LocalPV daemonset. The problem with this
is that each OS has a different set of libraries, so we needed different
operator yamls for different OS versions.
Here we mount the host's root directory inside the ZFS-LocalPV daemonset pod;
the driver does a chroot to this path and runs the command. Since all the
libraries present on the host are then available inside the pod, we don't need
to mount each library, and this works for all operating systems.
To be on the safe side, we mount the host's root directory
as a read-only filesystem.
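A minimal sketch of what such a mount could look like in the daemonset pod spec (the container name and mount path are placeholders; the actual operator yaml is authoritative):
```yaml
# Sketch: mount the host's root read-only so zfs commands can be run via chroot.
containers:
  - name: openebs-zfs-plugin
    volumeMounts:
      - name: host-root
        mountPath: /host
        readOnly: true
volumes:
  - name: host-root
    hostPath:
      path: /
      type: Directory
```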
Signed-off-by: Pawan <pawan@mayadata.io>
* adding comment for namespace
Signed-off-by: Pawan <pawan@mayadata.io>
This field was added in Kubernetes 1.16 and informs Kubernetes about
the volume lifecycle modes that are supported by the driver. The default is
"Persistent" if it is not set.
An operator yaml that sets volumeLifecycleModes will not work on k8s 1.14 and 1.15.
Since the driver supports those k8s versions, there is no need to mention
volumeLifecycleModes in the operator; the default is "Persistent" anyway.
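For reference, a sketch of where the field would sit on a 1.16+ cluster; it is deliberately left out of the shipped operator yaml:
```yaml
# Sketch: volumeLifecycleModes on Kubernetes 1.16+; omitted from the operator
# yaml so the same yaml also works on 1.14/1.15, where the default is Persistent.
apiVersion: storage.k8s.io/v1beta1
kind: CSIDriver
metadata:
  name: zfs.csi.openebs.io
spec:
  volumeLifecycleModes:
    - Persistent
```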
Signed-off-by: Pawan <pawan@mayadata.io>
Added snapshot and clone related test cases. Also restructured
the BDD framework to loop through the supported fstypes and perform all
the test cases we have.
Signed-off-by: Pawan <pawan@mayadata.io>
Applications that want to share a volume can use the below storageclass
to have their volumes shared by multiple pods:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv
parameters:
  shared: "yes"
  fstype: "zfs"
  poolname: "zfspv-pool"
provisioner: zfs.csi.openebs.io
```
A volume provisioned using this storageclass can now be used by multiple pods.
The pods have to take care of data consistency themselves and use their own locking mechanism.
One thing to note here is that the pods will be scheduled to the node where the volume is
present, so that all the pods can use the same volume by accessing it locally.
This way we avoid the NFS overhead and get optimal performance.
Also fixed the log formatting in the GRPC log.
Signed-off-by: Pawan <pawan@mayadata.io>
The controller did not check whether the volume had been created and returned
success, which in turn bound the PVC to the PV.
The PVC should not be bound until the corresponding ZFS volume has been created.
Now the controller checks that the ZFSVolume CR state is "Ready" before returning
success. The CSI sidecar retries the CreateVolume request when it gets
an error reply, and once the ZFS node agent has created the ZFS volume and set the
ZFSVolume CR state to "Ready", the controller returns success for the
CreateVolume request and the PVC is bound.
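For reference, a rough sketch of a ZFSVolume CR once the node agent has marked it ready; the apiVersion and field names here are approximate and the CRD shipped with the driver is authoritative:
```yaml
# Sketch only: the controller waits for status.state to become Ready before it
# returns success for CreateVolume; field layout is approximate.
apiVersion: zfs.openebs.io/v1alpha1
kind: ZFSVolume
metadata:
  name: pvc-<uuid>
  namespace: openebs
spec:
  poolName: zfspv-pool
  ownerNodeID: node-1
  capacity: "4294967296"
status:
  state: Ready
```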
Signed-off-by: Pawan <pawan@mayadata.io>
adding grafana dashboard for ZFS Local PV that shows the following metrics:
- Volume Capacity (used space percentage)
- ARC Size, Hits, Misses
- L2ARC Size, Hits, Misses
- ZPOOL Read/Write IOs
- ZPOOL Read/Write time
This dashboard was inspired by https://grafana.com/grafana/dashboards/7845
Signed-off-by: Pawan <pawan@mayadata.io>
This commit adds support for creating a Raw Block Volume by requesting volumeMode as Block in the PVC:
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: block-claim
spec:
  volumeMode: Block
  storageClassName: zfspv-block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```
The driver will create a zvol for this volume and bind mount the block device at the given path.
Signed-off-by: Pawan <pawan@mayadata.io>
This commit adds support for specifying custom labels on the Kubernetes nodes and using them in the allowedTopologies section of the StorageClass (see the sketch after the notes below).
A few notes:
- This PR depends on the CSI driver's capability to support custom topology keys.
- The labels should be added to the nodes first and the driver deployed afterwards, so that it is aware of
all the labels a node has. If labels are added after the ZFS-LocalPV driver
has been deployed, a restart of all the node CSI driver agents is required so that the driver
can pick up the labels and add them as supported topology keys.
- If the storageclass is using Immediate binding mode and no topology key is mentioned,
then all the nodes should be labeled using the same key, that means:
- the same key should be present on all nodes; nodes can have different values for that key.
- If nodes are labeled with different keys, i.e. some nodes have keys that others don't, then ZFSPV's default scheduler cannot effectively do volume-count-based scheduling. In this case the CSI provisioner will pick keys from a random node and prepare the preferred topology list using the nodes which have those keys defined, and the ZFSPV scheduler will schedule the PV among those nodes only.
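A minimal sketch of such a StorageClass, assuming the nodes were labeled with a hypothetical custom key openebs.io/rack before the driver was deployed:
```yaml
# Sketch: openebs.io/rack is a hypothetical custom node label used as a topology key.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv-rack1
parameters:
  poolname: "zfspv-pool"
provisioner: zfs.csi.openebs.io
allowedTopologies:
  - matchLabelExpressions:
      - key: openebs.io/rack
        values:
          - rack1
```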
Signed-off-by: Pawan <pawan@mayadata.io>
k8s is very slow in attaching the volumes when dealing with a
large number of volumeattachment objects
(k8s issue https://github.com/kubernetes/kubernetes/issues/84169).
A volumeattachment is a CR created just to tell its watcher, the csi-attacher,
that it has to call the Controller Publish/Unpublish grpc, which does the work needed
to attach the volume to a node, for example calling the
DigitalOcean Block Storage API service to attach a created volume to a specified node.
Since for ZFSPV the volume is already present locally, nothing needs to be done in Controller
Publish/Unpublish, so it is good to remove them.
This change therefore avoids creating the volumeattachment object, and also removes the csi-attacher
container, as it is not needed either since it only acts on volumeattachment objects.
Also removed the csi-cluster-driver-registrar container, as it is deprecated and not needed anymore.
We are using the csidriver beta CRD, so the minimum required k8s version is 1.14+.
Signed-off-by: Pawan <pawan@mayadata.io>
Validating a few parameters of the ZFSVolume custom resource (see the sketch after the list):
- compression can be "on", "off", "lzjb", "gzip", "gzip-[1-9]", "zle" or "lz4"
- encryption can be "on", "off", "aes-128-ccm", "aes-192-ccm", "aes-256-ccm", "aes-128-gcm", "aes-192-gcm" or "aes-256-gcm"
- dedup can be "on" or "off"
- poolname must be a string
- ownernodeid must be a string
- thinprovision can be "yes" or "no"
- volumetype can be "DATASET" or "ZVOL"
Also added the required fields needed to create a ZFSVolume CR:
- ownerNodeID
- poolname
- volumeType
- capacity
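A rough sketch of what such openAPIV3Schema validation can look like; the property names and patterns below are illustrative, and the generated CRD yaml in the repo is authoritative:
```yaml
# Sketch only: illustrative subset of the ZFSVolume CRD validation described above.
properties:
  spec:
    type: object
    required: ["ownerNodeID", "poolname", "volumeType", "capacity"]
    properties:
      compression:
        type: string
        pattern: "^(on|off|lzjb|zle|lz4|gzip(-[1-9])?)$"
      dedup:
        type: string
        enum: ["on", "off"]
      thinprovision:
        type: string
        enum: ["yes", "no"]
      volumetype:
        type: string
        enum: ["DATASET", "ZVOL"]
```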
Signed-off-by: Pawan <pawan@mayadata.io>
- To generate the CRD specs, run `make manifest`; the generated yamls land under the
deploy/yamls directory
- added an update-crd script to automate the steps to generate the
CRDs and the validation for each type
Signed-off-by: prateekpandey14 <prateek.pandey@mayadata.io>
We can resize the volume by updating the PVC yaml to
the desired size and applying it. The ZFS driver will take care
of updating the quota in case of a dataset. If we are using a
zvol mounted as an ext4 or xfs filesystem, the driver will take
care of expanding the filesystem via the resize2fs/xfs_growfs binaries.
For resize, the storageclass that provisions the PVC must support
resize; allowVolumeExpansion must be set to true in the storageclass:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv
allowVolumeExpansion: true
parameters:
  poolname: "zfspv-pool"
provisioner: zfs.csi.openebs.io
```
Signed-off-by: Pawan <pawan@mayadata.io>