Instead of checking for the finalizer, it is more intuitive to check
that the volume state is Ready before mounting it.
Also removed a duplicate if statement for btrfs which was added while resolving
the merge conflict in https://github.com/openebs/zfs-localpv/pull/175.
Signed-off-by: Pawan <pawan@mayadata.io>
btrfs, like xfs, needs a new UUID generated for cloned volumes:
btrfs treats all devices with the same UUID as the same filesystem,
so here we generate a new UUID for the cloned volumes using the
btrfstune command.
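A minimal sketch of the idea, assuming an illustrative device path for
the cloned volume:
```
# Give the cloned btrfs filesystem its own UUID before mounting it;
# the device path below is hypothetical.
btrfstune -u /dev/zvol/zfspv-pool/pvc-clone

# xfs needs the equivalent step, done via xfs_admin:
# xfs_admin -U generate /dev/zvol/zfspv-pool/pvc-clone
```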
Signed-off-by: Pawan <pawan@mayadata.io>
Applications that want to share a volume can use the below storageclass
to have their volumes shared by multiple pods:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv
parameters:
  shared: "yes"
  fstype: "zfs"
  poolname: "zfspv-pool"
provisioner: zfs.csi.openebs.io
```
Now a volume provisioned using this storageclass can be used by multiple pods.
The pods have to ensure data consistency themselves and have to have their own
locking mechanism. One thing to note here is that the pods will be scheduled
to the node where the volume is present, so that all the pods can use the same
volume, as they can only access it locally. This way we can avoid the NFS
overhead and get optimal performance.
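As an illustrative check (the PV name here is hypothetical), the node
pinning is visible on the PV itself:
```
# The PV carries a node affinity that keeps every consumer of the
# shared volume on the node owning the dataset.
kubectl get pv pvc-3fe69b0e-9f91-4c6e-8e5c-eb4218468765 \
  -o jsonpath='{.spec.nodeAffinity.required}'
```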
Also fixed the log formatting in the gRPC log.
Signed-off-by: Pawan <pawan@mayadata.io>
We cannot mount a dataset to more than one path via the zfs mount command,
so we are shifting to the legacy way of handling ZFS volumes, where we can
mount/umount the datasets via the standard mount and umount commands.
This will also add a building block for the SINGLE-NODE-MULTI-WRITER capability.
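A minimal sketch of the legacy flow, assuming an illustrative dataset
name and target path:
```
# Switch the dataset to legacy mounting, then use the standard tools
# (dataset name and mount path are hypothetical).
zfs set mountpoint=legacy zfspv-pool/pvc-xyz
mount -t zfs zfspv-pool/pvc-xyz /var/lib/kubelet/pods/pod-a/volume-target
umount /var/lib/kubelet/pods/pod-a/volume-target
```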
Signed-off-by: Pawan <pawan@mayadata.io>
A PVC will not be bound if there are wrong parameters/poolname in the
storageclass; the ZFSVolume CR will still be created and will remain in
Pending state. Deleting the PVC will remove the PVC, and since the PVC was
never bound, the ZFS-LocalPV driver will not get the delete call and will
leave the ZFSVolume CR hanging there.
Reverting the behavior introduced in https://github.com/openebs/zfs-localpv/pull/121:
now the PVC will be bound, but the ZFSVolume will remain in Pending state
until the volume is created.
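An illustrative way to observe the new behavior (the PVC name is
hypothetical):
```
# The PVC binds right away, while the ZFSVolume CR keeps reporting its
# own state until the dataset is actually created on the node.
kubectl get pvc csi-zfspv
kubectl get zfsvolumes.zfs.openebs.io -n openebs
```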
Signed-off-by: Pawan <pawan@mayadata.io>
This commit adds support for creating a Raw Block Volume by requesting volumeMode: Block in the PVC:
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: block-claim
spec:
  volumeMode: Block
  storageClassName: zfspv-block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```
The driver will create a zvol for this volume and bind mount the block device at the given path.
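Roughly, the publish step for a block volume looks like the below
sketch, with hypothetical paths:
```
# The zvol shows up as a device node; publishing it is a bind mount of
# that device onto the pod's target file (all paths illustrative).
touch /var/lib/kubelet/pods/pod-a/volumeDevices/block-claim
mount --bind /dev/zvol/zfspv-pool/pvc-xyz \
      /var/lib/kubelet/pods/pod-a/volumeDevices/block-claim
```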
Signed-off-by: Pawan <pawan@mayadata.io>
This looks like a bug in ZFS: when you change the mountpoint property to
none, ZFS normally unmounts the filesystem automatically. When we delete a
pod, we get the unmount request for the old pod and the mount request for
the new pod. The driver performs the unmount by setting mountpoint to none,
assumes the unmount has happened, and proceeds to delete the mountpath, but
here ZFS has not unmounted the dataset:
```
$ sudo zfs get all zfspv-pool/pvc-3fe69b0e-9f91-4c6e-8e5c-eb4218468765 | grep mount
zfspv-pool/pvc-3fe69b0e-9f91-4c6e-8e5c-eb4218468765 mounted yes -
zfspv-pool/pvc-3fe69b0e-9f91-4c6e-8e5c-eb4218468765 mountpoint none local
zfspv-pool/pvc-3fe69b0e-9f91-4c6e-8e5c-eb4218468765 canmount on
```
Here the driver assumed the dataset had been unmounted and proceeded to
delete the mountpath, which deletes the data as part of the cleanup for the
NodeUnpublish request. Shifting to using zfs umount instead of doing zfs
set mountpoint=none for unmounting the dataset.
Also, the driver was using os.RemoveAll, which is very risky as it also
removes children; since the mountpoint is not supposed to contain anything,
plain os.Remove is sufficient, and it will fail if anything is there.
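In shell terms, the new unmount path looks like this, using the dataset
from the output above (the mountpath is illustrative):
```
# Unmount explicitly instead of toggling the mountpoint property, then
# remove the now-empty mountpath; rmdir, like os.Remove, fails if
# anything is still inside.
zfs umount zfspv-pool/pvc-3fe69b0e-9f91-4c6e-8e5c-eb4218468765
rmdir /var/lib/kubelet/pods/pod-a/volume-target
```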
Signed-off-by: Pawan <pawan@mayadata.io>
There can be cases where the openebs namespace has been accidentally
deleted (Optoro case: https://mdap.zendesk.com/agent/tickets/963). There,
the driver attempted to destroy the dataset, which first unmounts the
dataset and then tries to destroy it; the destroy fails as the volume is
busy. Here, as mentioned in the steps to recover, we have to manually
mount the dataset:
```
6. The driver might have attempted to destroy the volume before going down,
   which sets mounted to "no" (this is strange behavior on GKE Ubuntu 18.04).
   We have to mount the dataset: go to each node and check if there is any
   unmounted volume
       zfs get mounted
   If there is any dataset with this property set to "no", we should do the below:
       mountpath=$(zfs get -Hp -o value mountpoint <dataset name>)
       zfs set mountpoint=none <dataset name>
       zfs set mountpoint=$mountpath <dataset name>
   This will set the dataset to be mounted.
```
So in this case the volume will be unmounted while the mountpoint is still
set to the mountpath. If the application pod is deleted later on, it will
try to mount the zfs dataset; here just setting the `mountpoint` property
is not sufficient, since the zfs dataset has been unmounted (via zfs
destroy in this case), so we have to explicitly mount the dataset,
**otherwise the application will start running without any persistent
storage**. Here we are automating the manual steps performed to resolve the
problem: the code now checks whether the zfs dataset is mounted after
setting the mountpoint property and, if it is not, attempts to mount it.
This is not the case with zvols, as the driver does not attempt to unmount
them, so zvols are fine.
Also, the NodeUnpublish operation MUST be idempotent: if this RPC failed,
or the CO does not know whether it failed, it can choose to call
NodeUnpublish again. So this is handled now, and we return success if the
volume is not mounted; also added descriptive error messages in a few
places.
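A minimal sketch of the automated check, assuming an illustrative
dataset name:
```
# If the dataset is still unmounted after the mountpoint property has
# been set, mount it explicitly (dataset name is hypothetical).
if [ "$(zfs get -H -o value mounted zfspv-pool/pvc-xyz)" = "no" ]; then
    zfs mount zfspv-pool/pvc-xyz
fi
```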
Signed-off-by: Pawan <pawan@mayadata.io>
Applications can now use a storageclass like the below to create a ZFS filesystem:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv5
allowVolumeExpansion: true
parameters:
  blocksize: "4k"
  fstype: "zfs"
  poolname: "zfspv-pool"
provisioner: zfs.csi.openebs.io
```
ZFSPV was supporting only the ext2/3/4 and xfs filesystems, which adds one
extra filesystem layer on top of the ZFS filesystem. Now we can write
directly to the ZFS filesystem and get optimal performance by creating a
ZFS filesystem for storage.
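In shell terms the difference looks roughly like the below; dataset
names are illustrative and the exact property mapping is an assumption:
```
# fstype: "zfs" -> a dataset is created directly, no extra layer:
zfs create -o recordsize=4k zfspv-pool/pvc-xyz

# ext4/xfs -> a zvol plus a filesystem formatted on top of it:
zfs create -V 5G -o volblocksize=4k zfspv-pool/pvc-abc
mkfs.ext4 /dev/zvol/zfspv-pool/pvc-abc
```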
Signed-off-by: Pawan <pawan@mayadata.io>
This PR adds support for the CSI driver to pick a node matching the topology specified in the storage class. The admin can specify allowedTopologies in the StorageClass to list the nodes where the ZFS pools are set up:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv
allowVolumeExpansion: true
parameters:
  blocksize: "4k"
  compression: "on"
  dedup: "on"
  thinprovision: "yes"
  poolname: "zfspv-pool"
provisioner: zfs-localpv
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
- matchLabelExpressions:
  - key: kubernetes.io/hostname
    values:
      - gke-zfspv-pawan-default-pool-c8929518-cgd4
      - gke-zfspv-pawan-default-pool-c8929518-dxzc
```
Note: This PR picks up the first node from the list of nodes available.
Signed-off-by: Pawan <pawan@mayadata.io>
Adds provisioning and deprovisioning of volumes on the node where a ZFS
pool has already been set up. The pool name and the volume parameters have
to be given in the storage class, which will be used to provision the
volume.
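For completeness, the pool itself is created by the admin on the node
beforehand; a minimal sketch with a hypothetical disk:
```
# A ZFS pool must already exist on the node before volumes can be
# provisioned from it (device path is hypothetical).
zpool create zfspv-pool /dev/sdb
```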
Signed-off-by: Pawan <pawan@mayadata.io>