Commit graph

26 commits

Author SHA1 Message Date
Pawan
14f237db79 fix(yaml): removing volumeLifecycleModes from the operator yaml
This field was added in Kubernetes 1.16 and it informs Kubernetes about
the volume lifecycle modes that are supported by the driver. The default is
"Persistent" if the field is not set.

With this field present, the operator yaml does not work on k8s 1.14 and 1.15.
Since the driver supports those k8s versions and the default is "Persistent"
anyway, there is no need to mention volumeLifecycleModes in the operator yaml.
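
For context, a minimal Go sketch (using the k8s.io/api/storage/v1beta1 types; this is not code from this repository, and the values are illustrative) of the CSIDriver object in question, with VolumeLifecycleModes left unset so that the "Persistent" default applies:

```go
package main

import (
	"fmt"

	storagev1beta1 "k8s.io/api/storage/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	driver := storagev1beta1.CSIDriver{
		ObjectMeta: metav1.ObjectMeta{Name: "zfs.csi.openebs.io"},
		// Spec.VolumeLifecycleModes is left unset on purpose: Kubernetes then
		// assumes the default mode, which is the same as listing only
		// storagev1beta1.VolumeLifecyclePersistent, and the manifest stays
		// usable on 1.14/1.15 clusters that do not know the field.
		Spec: storagev1beta1.CSIDriverSpec{},
	}
	fmt.Printf("CSIDriver %s: default lifecycle mode is %q\n",
		driver.Name, storagev1beta1.VolumeLifecyclePersistent)
}
```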

Signed-off-by: Pawan <pawan@mayadata.io>
2020-07-22 23:07:44 +05:30
Pawan
ea552beb1f fix(xfs, uuid): fixed uuid generation issue when mount fails
This issue is specific to xfs, and it shows up when we create a clone volume and the system takes time to create the device.

When we create a clone volume from an xfs filesystem, ZFS-LocalPV generates a new UUID for the clone, since a new UUID is needed to mount the cloned filesystem. To generate the new UUID, ZFS-LocalPV first replays the xfs log by mounting the device at a tmp location.

What was happening here is that, because device creation was slow, we created the tmp location to mount the clone volume, but since the device had not been created yet, the mount failed. On the next try the tmp location already exists, so it kept failing at that point on every reconciliation.
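
A rough sketch of the intended flow (the helper name and device path are illustrative, not the driver's actual code): mount the clone with -o nouuid at a temporary location so xfs can replay its log, always clean the temporary directory up so a slow device cannot leave a stale directory behind, and only then stamp a new UUID:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// regenerateXFSUUID is an illustrative helper, not the driver's actual code.
func regenerateXFSUUID(device string) error {
	// xfs will not accept a UUID change while its log is dirty, so replay the
	// log first by mounting the device at a temporary location (-o nouuid lets
	// it mount even though the UUID still matches the origin volume).
	tmp, err := os.MkdirTemp("", "xfs-uuid-")
	if err != nil {
		return err
	}
	// Remove the temporary directory even when the mount fails (e.g. the zvol
	// is not created yet); otherwise every later reconciliation trips over the
	// leftover directory, which is the bug described above.
	defer os.RemoveAll(tmp)

	if out, err := exec.Command("mount", "-t", "xfs", "-o", "nouuid", device, tmp).CombinedOutput(); err != nil {
		return fmt.Errorf("log replay mount failed: %v: %s", err, out)
	}
	if out, err := exec.Command("umount", tmp).CombinedOutput(); err != nil {
		return fmt.Errorf("umount failed: %v: %s", err, out)
	}
	// Now a fresh UUID can be written to the superblock.
	if out, err := exec.Command("xfs_admin", "-U", "generate", device).CombinedOutput(); err != nil {
		return fmt.Errorf("xfs_admin failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := regenerateXFSUUID("/dev/zvol/zfspv-pool/clone-vol"); err != nil {
		fmt.Println(err)
	}
}
```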

Signed-off-by: Pawan <pawan@mayadata.io>
2020-07-22 23:07:16 +05:30
Pawan
e00a6b9ae2 fix(zfspv): mounting the volume if it is ready
Instead of checking for the finalizer, it is more intuitive to check that the
volume state is ready before mounting it.

Also removed a duplicate if statement for btrfs which was added while resolving
the merge conflict in https://github.com/openebs/zfs-localpv/pull/175.

Signed-off-by: Pawan <pawan@mayadata.io>
2020-07-22 21:40:00 +05:30
Pawan
b6b4f0bb52 chore(changelog): adding v0.9.0 changelog in the repo
Signed-off-by: Pawan <pawan@mayadata.io>
2020-07-15 14:57:01 +05:30
Pawan
39d2ee2859 fix(zfspv): fixing mounting issue with xfs
We changed the ubuntu docker image to 20.04 in https://github.com/openebs/zfs-localpv/pull/170,
which has issues formatting the zvol as an xfs filesystem. A filesystem
formatted with xfs there fails to mount with the error: "missing codepage or helper program, or other error".

Reverting to ubuntu 19.10 to fix this issue.

Signed-off-by: Pawan <pawan@mayadata.io>
2020-07-09 19:10:36 +05:30
Pawan
21045a5b1f feat(bdd): adding snapshot and clone related test cases
Added snapshot and clone related test cases. Also restructured
the BDD framework to loop through the supported fstypes and perform all
the test cases we have.
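
A minimal sketch of the restructuring idea, written with plain Go testing subtests rather than the repository's actual ginkgo suite (the fstype list and step names are illustrative): the same checks run once per supported fstype instead of being duplicated:

```go
package bdd_test

import "testing"

// illustrative list; the suite's actual set of supported fstypes may differ
var fstypes = []string{"zfs", "ext4", "xfs", "btrfs"}

func TestVolumeLifecyclePerFstype(t *testing.T) {
	for _, fstype := range fstypes {
		fstype := fstype // capture for the subtest closure
		t.Run(fstype, func(t *testing.T) {
			// placeholder standing in for the real BDD steps: provision a
			// volume with this fstype, snapshot it, clone the snapshot and
			// verify the application pod runs against the clone.
			t.Logf("running volume/snapshot/clone checks for fstype %q", fstype)
		})
	}
}
```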

Signed-off-by: Pawan <pawan@mayadata.io>
2020-07-07 23:21:20 +05:30
vaniisgh
8bbf3d7d2f
feat(zfspv) Add golint check to travis (#175)
Signed-off-by: vaniisgh <vanisingh@live.co.uk>
2020-07-07 18:21:02 +05:30
Pawan
8b7ad5cb45 fix(btrfs): fixing duplicate UUID issue with btrfs
btrfs, like xfs, needs a new UUID for cloned volumes. btrfs treats all
devices with the same UUID as the same filesystem, so here we generate a
new UUID for the cloned volumes using the btrfstune command.
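
A minimal sketch of that step (the helper name and device path are illustrative, not the driver's actual code):

```go
package main

import (
	"fmt"
	"os/exec"
)

// newBtrfsUUID is an illustrative helper: it stamps a fresh random UUID on
// the cloned device so btrfs no longer treats it as the origin filesystem.
func newBtrfsUUID(device string) error {
	// -u asks btrfstune for a new random fsid/UUID, -f skips the confirmation prompt
	if out, err := exec.Command("btrfstune", "-f", "-u", device).CombinedOutput(); err != nil {
		return fmt.Errorf("btrfstune failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := newBtrfsUUID("/dev/zvol/zfspv-pool/clone-vol"); err != nil {
		fmt.Println(err)
	}
}
```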

Signed-off-by: Pawan <pawan@mayadata.io>
2020-07-03 21:04:51 +05:30
vaniisgh
a19877e4c0
feat(zfspv): check pod-status in BDD test (#171)
Signed-off-by: vaniisgh <vanisingh@live.co.uk>
2020-07-02 14:07:43 +05:30
Pawan
051f26fe16 feat(btrfs): adding support to have btrfs filesystem for ZFS-LocalPV
Now, applications can use the btrfs filesystem by mentioning "btrfs"
as the fstype in the storageclass:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv
parameters:
  fstype: "btrfs"
  poolname: "zfspv-pool"
provisioner: zfs.csi.openebs.io
```

Signed-off-by: Pawan <pawan@mayadata.io>
2020-07-02 00:40:59 +05:30
Pawan
27065bf40a feat(shared): adding shared mount support ZFSPV volumes
Applications that want to share a volume can use the below storageclass
to make the volume shareable by multiple pods:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv
parameters:
  shared: "yes"
  fstype: "zfs"
  poolname: "zfspv-pool"
provisioner: zfs.csi.openebs.io
```

Now a volume provisioned using this storageclass can be used by multiple pods.
The pods have to take care of data consistency themselves and need their own
locking mechanism. One thing to note is that the pods will be scheduled to the
node where the volume is present, since they can only access it locally; this
way we avoid the NFS overhead and get optimal performance.

Also fixed the log formatting in the GRPC log.

Signed-off-by: Pawan <pawan@mayadata.io>
2020-07-02 00:40:15 +05:30
vaniisgh
ac9d6d5729
feat(zfspv) add go lint target (#167)
Signed-off-by: vaniisgh <vanisingh@live.co.uk>
2020-06-30 13:26:12 +05:30
vaniisgh
d0d1664d43
feat(zfspv): move to klog (#166)
Signed-off-by: vaniisgh <vanisingh@live.co.uk>
2020-06-29 12:18:33 +05:30
vaniisgh
54f2b0b9fd
chore(doc): update docs for GO module support (#160)
Signed-off-by: vaniisgh <vanisingh@live.co.uk>
2020-06-26 17:11:31 +05:30
vaniisgh
13ec77c75e
feat(zfspv): filter grpc logs to reduce the pollution (#161)
Signed-off-by: vaniisgh <vanisingh@live.co.uk>
2020-06-24 21:41:15 +05:30
Pawan
8968605602 chore(changelog): adding v0.8.0 changelog
Signed-off-by: Pawan <pawan@mayadata.io>
2020-06-15 15:04:35 +05:30
Pawan
91e232a840 adding missing changelog from the contributor
Signed-off-by: Pawan <pawan@mayadata.io>
2020-06-11 18:41:25 +05:30
Pawan
639ead416e feat(mount): moving to legacy mount
We cannot mount a dataset at more than one path via the zfs mount command,
so we are shifting to the legacy way of handling ZFS volumes, where we can
mount/umount the datasets via the regular mount and umount commands.

This will also add a building block for SINGLE-NODE-MULTI-WRITER Capability.
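
A minimal sketch of the legacy-mount handling (dataset, mount target and helper names are illustrative, not the driver's actual code): set mountpoint=legacy so ZFS stops auto-mounting the dataset, then drive mounts through the regular mount command, which also allows the same dataset to be mounted at more than one path:

```go
package main

import (
	"fmt"
	"os/exec"
)

// run is an illustrative helper that executes a command and wraps any failure.
func run(name string, args ...string) error {
	if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
		return fmt.Errorf("%s %v failed: %v: %s", name, args, err, out)
	}
	return nil
}

// mountLegacy hands mount management over to the OS instead of `zfs mount`.
func mountLegacy(dataset, target string) error {
	if err := run("zfs", "set", "mountpoint=legacy", dataset); err != nil {
		return err
	}
	// with a legacy mountpoint the dataset is mounted via the plain mount
	// command (and unmounted via umount), and can be mounted at several paths
	return run("mount", "-t", "zfs", dataset, target)
}

func main() {
	if err := mountLegacy("zfspv-pool/pvc-demo", "/mnt/app-data"); err != nil {
		fmt.Println(err)
	}
}
```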

Signed-off-by: Pawan <pawan@mayadata.io>
2020-06-09 14:41:53 +05:30
Pawan
b08a1e2a1f feat(usage): include pvc name in volume events
Signed-off-by: Pawan <pawan@mayadata.io>
2020-06-08 13:05:23 +05:30
Pawan
e558bb52cb feat(centos): adding operator yaml for centos7 and centos8
Signed-off-by: Pawan <pawan@mayadata.io>
2020-06-08 10:35:13 +05:30
Pawan
45015bf063 fix(pvc): fixing stale ZFSVolume CR issue when deleting pending PVC
A PVC will not be bound if the storageclass has wrong parameters or a wrong
poolname; the ZFSVolume CR will still be created and will remain in the Pending
state. Deleting such a PVC removes the PVC, but since it was never bound, the
ZFS-LocalPV driver never gets the delete call and the ZFSVolume CR is left hanging there.
Reverting the behavior introduced in https://github.com/openebs/zfs-localpv/pull/121:
now the PVC will be bound, but the ZFSVolume will still be in the Pending state until the volume is created.

Signed-off-by: Pawan <pawan@mayadata.io>
2020-06-08 10:31:39 +05:30
Pawan Prakash Sharma
0e2223985e
chore(changelog): add missing changelog for v0.8 release (#146)
Signed-off-by: Pawan <pawan@mayadata.io>
2020-06-05 18:01:43 +05:30
Christopher J. Ruwe
377b881653 make character case for keys in parameters map irrelevant, fixing #143
More specifically,
- introduce helper function to get maps with all keys set to lowercase,
- introduce lookup helper based on such maps and
- change lookups for CreateVolumeRequest()s and CreateVolume()s so that
  parameter keys are processed as lowercase irrespective of their actual
  spelling (see the sketch below).
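
A minimal sketch of such a case-insensitive lookup (the function and parameter names are illustrative, not the repo's actual helpers):

```go
package main

import (
	"fmt"
	"strings"
)

// lowerKeys returns a copy of the map with every key lowercased, so that
// lookups no longer depend on how the storageclass author spelled the key.
func lowerKeys(in map[string]string) map[string]string {
	out := make(map[string]string, len(in))
	for k, v := range in {
		out[strings.ToLower(k)] = v
	}
	return out
}

func main() {
	// parameters as they might arrive in a CreateVolumeRequest
	params := lowerKeys(map[string]string{
		"PoolName": "zfspv-pool",
		"FsType":   "zfs",
	})
	fmt.Println(params["poolname"], params["fstype"]) // zfspv-pool zfs
}
```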

Signed-off-by: Christopher J. Ruwe <cjr@cruwe.de>
2020-06-04 19:25:05 +05:30
Pawan
472fd603ac feat(beta): adding v1 CRD for ZFS-LocalPV
Moving the CRDs to stable v1 version.

Signed-off-by: Pawan <pawan@mayadata.io>
2020-06-04 16:02:32 +05:30
Pawan
25d1f1a413 feat(zfspv): pvc should be bound only if volume has been created.
The controller did not check whether the volume had actually been created and
returned success anyway, which in turn bound the PVC to the PV.

The PVC should not be bound until the corresponding ZFS volume has been created.
Now the controller checks that the ZFSVolume CR state is "Ready" before returning
success. The CSI provisioner retries the CreateVolume request whenever it gets
an error reply, and once the ZFS node agent creates the ZFS volume and sets the
ZFSVolume CR state to "Ready", the controller returns success for the
CreateVolume request and the PVC gets bound.
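
A rough sketch of that check (the ZFSVolume type and field names here are assumptions made for illustration, not the repo's actual API): CreateVolume keeps returning a retryable error until the node agent marks the volume Ready:

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// ZFSVolume is a stand-in for the real CR type; field names are assumptions.
type ZFSVolume struct {
	Name  string
	State string // e.g. "Pending" or "Ready"
}

// checkVolumeReady mirrors the idea described above: fail the CreateVolume
// request until the node agent flips the CR state to Ready, so the CSI
// provisioner keeps retrying and the PVC stays unbound in the meantime.
func checkVolumeReady(vol ZFSVolume) error {
	if vol.State != "Ready" {
		return status.Errorf(codes.Aborted, "volume %s is not ready yet", vol.Name)
	}
	return nil
}

func main() {
	fmt.Println(checkVolumeReady(ZFSVolume{Name: "pvc-demo", State: "Pending"})) // retryable error
	fmt.Println(checkVolumeReady(ZFSVolume{Name: "pvc-demo", State: "Ready"}))   // <nil>
}
```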

Signed-off-by: Pawan <pawan@mayadata.io>
2020-05-21 08:49:57 +05:30
Pawan
bd86d4cd48 chore(doc): adding 0.7.0 and 0.6.1 changelog
Also updated readme with the link to configure custom topology keys.

Signed-off-by: Pawan <pawan@mayadata.io>
2020-05-15 19:47:42 +05:30