Commit graph

44 commits

Author SHA1 Message Date
Pawan
48e6a19d7c chore(doc): adding 1.2.0 changelog
Signed-off-by: Pawan <pawan@mayadata.io>
2020-12-16 01:41:17 +05:30
Pawan
b42893ce47 fix(mount): fixing idempotency check for the mount path
Signed-off-by: Pawan <pawan@mayadata.io>
2020-12-15 14:01:30 +05:30
Pawan Prakash Sharma
a73a59fd49
feat(sanity): adding CSI Sanity test (#232)
* adding CSI Sanity test for ZFS-LocalPV
* use lowercase consistently in all places

Signed-off-by: Pawan <pawan@mayadata.io>
2020-12-10 11:53:16 +05:30
Pawan
3404bc032b chore(doc): adding v1.1.0 changelog
Signed-off-by: Pawan <pawan@mayadata.io>
2020-11-19 07:46:59 +05:30
Pawan Prakash Sharma
fb6f1006da
feat(clone): add support for creating the Clone from volume as datasource (#234)
This PR adds the capability to create a clone directly from a PVC:

```
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-clone
spec:
  storageClassName: openebs-snap
  dataSource:
    name: pvc-snap
    kind: PersistentVolumeClaim
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
```
The ZFS-LocalPV driver will create one internal snapshot with the same name
as the new volume and will create a clone out of it. Also,
while destroying the volume, the driver will take care of deleting
the snapshot that was created for the clone.

Signed-off-by: Pawan <pawan@mayadata.io>
2020-11-11 18:58:25 +05:30
Pawan
64bc7cb1c9 feat(upgrade): support parallel/faster upgrades for node daemonset
For ZFSPV, all the node daemonset pods can go into the terminating state at
the same time, since there is no minimum availability requirement for those pods.

Changing maxUnavailable to 100% so that K8s can upgrade all the daemonset
pods in parallel.
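
For reference, a minimal sketch of the corresponding update strategy, assuming a
standard apps/v1 DaemonSet spec (only the relevant stanza is shown):

```yaml
# excerpt of the node daemonset spec (only the update strategy shown)
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      # every node pod may be unavailable at once, so k8s replaces them in parallel
      maxUnavailable: 100%
```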

Signed-off-by: Pawan <pawan@mayadata.io>
2020-11-03 12:54:58 +05:30
Pawan Prakash Sharma
f386bfc4ce
feat(kustomize): adding deployment via kustomize (#231)
Signed-off-by: Pawan <pawan@mayadata.io>
2020-10-31 10:06:22 +05:30
Pawan
f998bc5c5e chore(changelog): adding changelog for v1.0.1 release
The following PRs were added to the changelog:
- https://github.com/openebs/zfs-localpv/pull/211
- https://github.com/openebs/zfs-localpv/pull/221

Signed-off-by: Pawan <pawan@mayadata.io>
2020-10-15 20:56:03 +05:30
Gagandeep Singh
3da4f7308e
chore(refactor): Remove MountInfo struct from api (#225)
Signed-off-by: Gagandeep Singh <codegagan@gmail.com>
2020-10-12 10:59:23 +05:30
Pawan
26968b5394 feat(backup,restore): adding validation for backup and restore
Added schema validation for the backup and restore CRs. Also validating
the server address in the backup/restore controller.

The server address is validated as:

^([0-9]+.[0-9]+.[0-9]+.[0-9]+:[0-9]+)$

which is:

<any number>.<any number>.<any number>.<any number>:<any number>

Here we are validating just the format of the address, not that the IP itself is
correct, which would be a little more complex. In any case, if the IP is not correct,
the zfs send will fail, so there is no need for complex validation of the
correct IP and port.
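
As an illustration, such a pattern can be enforced through the CRD's OpenAPI v3
validation; the field name below is hypothetical:

```yaml
# hypothetical excerpt of a backup/restore CRD validation schema
openAPIV3Schema:
  properties:
    spec:
      properties:
        backupDest:              # hypothetical field holding "<ip>:<port>"
          type: string
          pattern: '^([0-9]+.[0-9]+.[0-9]+.[0-9]+:[0-9]+)$'
```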

Signed-off-by: Pawan <pawan@mayadata.io>
2020-09-30 11:32:32 +05:30
Pawan
c9ea713333 chore(yaml): removing centos yamls from the repo
Now we have a single operator yaml that works for all
OS distros; we don't need OS-specific operator yamls.

Signed-off-by: Pawan <pawan@mayadata.io>
2020-09-16 21:09:10 +05:30
Pawan
b81f42a526 chore(changelog): adding change log for v1.0.0 release
Signed-off-by: Pawan <pawan@mayadata.io>
2020-09-15 23:33:54 +05:30
Pawan Prakash Sharma
e40026c98a
feat(zfspv): adding backup and restore support (#162)
This commit adds support for the Backup and Restore controllers, which will be watching for
the corresponding events. The velero plugin will create a Backup CR with the remote
location information, and the backup controller will send the data
to that remote location.

In the same way, the velero plugin will create a Restore CR to restore the
volume from the remote location, and the restore controller will restore
the data.

Steps to use the velero plugin with ZFS-LocalPV:

1. Install velero.

2. Add the openebs plugin:

velero plugin add openebs/velero-plugin:latest

3. Create the volumesnapshot location.

For a full backup:

```yaml
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: openebs.io/zfspv-blockstore
  config:
    bucket: velero
    prefix: zfs
    namespace: openebs
    provider: aws
    region: minio
    s3ForcePathStyle: "true"
    s3Url: http://minio.velero.svc:9000
```

For an incremental backup:

```yaml
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: openebs.io/zfspv-blockstore
  config:
    bucket: velero
    prefix: zfs
    backup: incremental
    namespace: openebs
    provider: aws
    region: minio
    s3ForcePathStyle: "true"
    s3Url: http://minio.velero.svc:9000
```

4. Create a backup:

velero backup create my-backup --snapshot-volumes --include-namespaces=velero-ns --volume-snapshot-locations=aws-cloud-default --storage-location=default

5. Create a schedule:

velero create schedule newschedule  --schedule="*/1 * * * *" --snapshot-volumes --include-namespaces=velero-ns --volume-snapshot-locations=aws-local-default --storage-location=default

6. Restore from the backup:

velero restore create --from-backup my-backup --restore-volumes=true --namespace-mappings velero-ns:ns1



Signed-off-by: Pawan <pawan@mayadata.io>
2020-09-08 13:44:39 +05:30
Pawan Prakash Sharma
a5e645b43d
feat(zfspv): mounting the root filesystem to remove the dependency on the Operating system (#204)
* feat(zfspv): mounting the root filesystem to remove the dependency on the OS

Previously we were mounting the individual libraries needed to run the zfs
binary inside the ZFS-LocalPV daemonset. The problem with this
is that each OS has a different set of libraries, so we needed different
operator yamls for different OS versions.

Here we mount the host's root directory inside the ZFS-LocalPV daemonset Pod,
which does a chroot to this path and runs the command. Since all the libraries
present on the host are then available inside the Pod, we don't need to mount each
library here, and it will work for all operating systems.

To be on the safe side, we are mounting the host's root directory
as a read-only filesystem.
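
A rough sketch of what the daemonset change looks like, assuming a standard
hostPath mount (container and volume names here are illustrative):

```yaml
# excerpt of the node daemonset pod spec
containers:
  - name: openebs-zfs-plugin      # illustrative container name
    volumeMounts:
      - name: host-root
        mountPath: /host          # the plugin chroots into this path to run zfs
        readOnly: true            # host root is mounted read-only to be safe
volumes:
  - name: host-root
    hostPath:
      path: /
      type: Directory
```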

Signed-off-by: Pawan <pawan@mayadata.io>

* adding comment for namespace

Signed-off-by: Pawan <pawan@mayadata.io>
2020-09-07 21:12:31 +05:30
Pawan
109fbced84 fix(build): update go version to 1.14.7
CVE-2020-16845 has been reported for go versions earlier than 1.14.7;
this PR upgrades the go version used for the travis builds.

Signed-off-by: Pawan <pawan@mayadata.io>
2020-08-27 23:12:40 +05:30
Pawan
9ef7d35d81 fix(upgrade): Reverting back to old way of checking the volume status
A few customers are using an old version of the driver where
the Status field is not present, so the mount will fail after
upgrading to version 0.9 or later.

Reverting to checking whether the finalizer is set to determine if the
volume is ready to be mounted.

Signed-off-by: Pawan <pawan@mayadata.io>
2020-08-26 22:11:00 +05:30
Pawan
b0bb8aa059 chore(doc): adding changelog for 0.9.1 release
Signed-off-by: Pawan <pawan@mayadata.io>
2020-08-15 11:01:02 +05:30
Pawan Prakash Sharma
b1b69ebfe7
fix(zfspv): rounding off the volume size to Gi and Mi (#191)
ZFS does not create the zvol if the volume size is not a multiple of
the volblocksize. There are use cases where a customer creates
a PVC with a size of 5G, which is 5 * 1000 * 1000 * 1000 bytes
and not a multiple of the default volblocksize of 8k.

In ZFS, volblocksize and recordsize must be a power of 2 from 512B to 1M,
so keeping the size in the form of Gi or Mi should be
sufficient to make the volsize a multiple of the volblocksize/recordsize.
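
To illustrate the arithmetic: 5G = 5 * 1000 * 1000 * 1000 = 5,000,000,000 bytes, and
5,000,000,000 / 8192 = 610,351.5625, which is not a whole number, so ZFS rejects it;
5Gi = 5 * 1024 * 1024 * 1024 = 5,368,709,120 bytes is exactly 655,360 * 8192, which ZFS accepts.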


Signed-off-by: Pawan <pawan@mayadata.io>
2020-08-07 20:50:13 +05:30
Pawan
14f237db79 fix(yaml): removing volumeLifecycleModes from the operator yaml
This field was added in Kubernetes 1.16 and it informs Kubernetes about
the volume modes that are supported by the driver. The default is
"Persistent" if it is not set.

With this field present, the operator yaml will not work on k8s 1.14 and 1.15. Since the
driver supports those k8s versions, there is no need to mention volumeLifecycleModes
in the operator, as the default "Persistent" is what we want anyway.
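
For reference, an illustrative CSIDriver object after the change (the field values
shown here are assumptions, not taken from the actual operator yaml):

```yaml
apiVersion: storage.k8s.io/v1beta1
kind: CSIDriver
metadata:
  name: zfs.csi.openebs.io
spec:
  attachRequired: false        # assumed value, shown for illustration only
  # volumeLifecycleModes is omitted: k8s >= 1.16 defaults it to ["Persistent"],
  # and k8s 1.14/1.15 do not recognize the field at all
```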

Signed-off-by: Pawan <pawan@mayadata.io>
2020-07-22 23:07:44 +05:30
Pawan
ea552beb1f fix(xfs, uuid): fixed uuid generation issue when mount fails
This issue is specific to xfs, and occurs when we create a clone volume and the system takes time to create the device.

When we create a clone volume with an xfs filesystem, ZFS-LocalPV goes ahead and generates a new UUID for the clone volume, as we need a new UUID to mount the new cloned filesystem. To generate the new UUID, ZFS-LocalPV first replays the xfs log by mounting the device at a tmp location.

What was happening here is that, since device creation is slow, we went ahead and created the tmp location to mount the clone volume, but since the device had not been created yet, the mount failed. On the next try, since the tmp location was already present, it kept failing at that point on every reconciliation.

Signed-off-by: Pawan <pawan@mayadata.io>
2020-07-22 23:07:16 +05:30
Pawan
e00a6b9ae2 fix(zfspv): mounting the volume if it is ready
Instead of checking for the finalizer, checking that the volume state is
ready before mounting it is more intuitive.

Also removed a duplicate if statement for btrfs which was added while resolving
the merge conflict in https://github.com/openebs/zfs-localpv/pull/175.

Signed-off-by: Pawan <pawan@mayadata.io>
2020-07-22 21:40:00 +05:30
Pawan
b6b4f0bb52 chore(changelog): adding v0.9.0 changelog in the repo
Signed-off-by: Pawan <pawan@mayadata.io>
2020-07-15 14:57:01 +05:30
Pawan
39d2ee2859 fix(zfspv): fixing mounting issue with xfs
We changed the ubuntu docker image to 20.04 in https://github.com/openebs/zfs-localpv/pull/170,
which has issues with formatting the zvol as an xfs filesystem: a filesystem
formatted as xfs fails to mount with the error "missing codepage or helper program, or other error".

Reverting to ubuntu 19.10 to fix this issue.

Signed-off-by: Pawan <pawan@mayadata.io>
2020-07-09 19:10:36 +05:30
Pawan
21045a5b1f feat(bdd): adding snapshot and clone related test cases
Added snapshot- and clone-related test cases. Also restructured
the BDD framework to loop through the supported fstypes and perform all
the test cases we have.

Signed-off-by: Pawan <pawan@mayadata.io>
2020-07-07 23:21:20 +05:30
vaniisgh
8bbf3d7d2f
feat(zfspv): Add golint check to travis (#175)
Signed-off-by: vaniisgh <vanisingh@live.co.uk>
2020-07-07 18:21:02 +05:30
Pawan
8b7ad5cb45 fix(btrfs): fixing duplicate UUID issue with btrfs
btrfs, like xfs, needs a new UUID generated for the
cloned volumes. btrfs treats all devices with the same UUID as the
same device, so here we generate a new UUID for the cloned volumes
using the btrfstune command.

Signed-off-by: Pawan <pawan@mayadata.io>
2020-07-03 21:04:51 +05:30
vaniisgh
a19877e4c0
feat(zfspv): check pod-status in BDD test (#171)
Signed-off-by: vaniisgh <vanisingh@live.co.uk>
2020-07-02 14:07:43 +05:30
Pawan
051f26fe16 feat(btrfs): adding support to have btrfs filesystem for ZFS-LocalPV
Now, applications can use the btrfs filesystem by specifying "btrfs"
as the fstype in the storageclass:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv
parameters:
  fstype: "btrfs"
  poolname: "zfspv-pool"
provisioner: zfs.csi.openebs.io
```

Signed-off-by: Pawan <pawan@mayadata.io>
2020-07-02 00:40:59 +05:30
Pawan
27065bf40a feat(shared): adding shared mount support for ZFSPV volumes
Applications that want to share a volume can use the below storageclass
to make their volumes shareable by multiple pods:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv
parameters:
  shared: "yes"
  fstype: "zfs"
  poolname: "zfspv-pool"
provisioner: zfs.csi.openebs.io
```

Now a volume provisioned using this storageclass can be used by multiple pods.
The pods have to take care of data consistency themselves and have to use their own locking mechanism.
One thing to note here is that the pods will be scheduled to the node where the volume is present,
so that all the pods can use the same volume, since they can only access it locally.
This way we avoid the NFS overhead and also get optimal performance.

Also fixed the log formatting in the GRPC log.

Signed-off-by: Pawan <pawan@mayadata.io>
2020-07-02 00:40:15 +05:30
vaniisgh
ac9d6d5729
feat(zfspv): add go lint target (#167)
Signed-off-by: vaniisgh <vanisingh@live.co.uk>
2020-06-30 13:26:12 +05:30
vaniisgh
d0d1664d43
feat(zfspv): move to klog (#166)
Signed-off-by: vaniisgh <vanisingh@live.co.uk>
2020-06-29 12:18:33 +05:30
vaniisgh
54f2b0b9fd
chore(doc): update docs for GO module support (#160)
Signed-off-by: vaniisgh <vanisingh@live.co.uk>
2020-06-26 17:11:31 +05:30
vaniisgh
13ec77c75e
feat(zfspv): filter grpc logs to reduce the pollution (#161)
Signed-off-by: vaniisgh <vanisingh@live.co.uk>
2020-06-24 21:41:15 +05:30
Pawan
8968605602 chore(changelog): adding v0.8.0 changelog
Signed-off-by: Pawan <pawan@mayadata.io>
2020-06-15 15:04:35 +05:30
Pawan
91e232a840 adding missing changelog from the contributor
Signed-off-by: Pawan <pawan@mayadata.io>
2020-06-11 18:41:25 +05:30
Pawan
639ead416e feat(mount): moving to legacy mount
We cannot mount datasets at more than one path via the zfs mount command,
so we are shifting to the legacy way of handling ZFS volumes, where we can mount/umount
the datasets via the standard mount and umount commands.

This will also add a building block for the SINGLE-NODE-MULTI-WRITER capability.

Signed-off-by: Pawan <pawan@mayadata.io>
2020-06-09 14:41:53 +05:30
Pawan
b08a1e2a1f feat(usage): include pvc name in volume events
Signed-off-by: Pawan <pawan@mayadata.io>
2020-06-08 13:05:23 +05:30
Pawan
e558bb52cb feat(centos): adding operator yaml for centos7 and centos8
Signed-off-by: Pawan <pawan@mayadata.io>
2020-06-08 10:35:13 +05:30
Pawan
45015bf063 fix(pvc): fixing stale ZFSVolume CR issue when deleting pending PVC
A PVC will not be bound if there are wrong parameters or a wrong poolname in the storageclass,
but the ZFSVolume CR will still be created and will remain in Pending state.
Deleting the PVC then removes the PVC, and since the PVC was never bound, the ZFS-LocalPV
driver will not get the delete call and will leave the ZFSVolume CR hanging there.
Reverting the behavior introduced in https://github.com/openebs/zfs-localpv/pull/121:
now the PVC will be bound, but the ZFSVolume will still be in Pending state until the volume is created.

Signed-off-by: Pawan <pawan@mayadata.io>
2020-06-08 10:31:39 +05:30
Pawan Prakash Sharma
0e2223985e
chore(changelog): add missing changelog for v0.8 release (#146)
Signed-off-by: Pawan <pawan@mayadata.io>
2020-06-05 18:01:43 +05:30
Christopher J. Ruwe
377b881653 make character case for keys in parameters map irrelevant, fixing #143
More specifically,
- introduce a helper function to get maps with all keys set to lowercase,
- introduce a lookup helper based on such maps, and
- change lookups for CreateVolumeRequest()s and CreateVolume()s so that
  parameter keys are processed as lowercase irrespective of the actual
  spelling.

Signed-off-by: Christopher J. Ruwe <cjr@cruwe.de>
2020-06-04 19:25:05 +05:30
Pawan
472fd603ac feat(beta): adding v1 CRD for ZFS-LocalPV
Moving the CRDs to stable v1 version.

Signed-off-by: Pawan <pawan@mayadata.io>
2020-06-04 16:02:32 +05:30
Pawan
25d1f1a413 feat(zfspv): pvc should be bound only if volume has been created.
The controller did not check whether the volume had been created
and returned success, which in turn bound the PVC to the PV.

The PVC should not be bound until the corresponding zfs volume has been created.
Now the controller checks that the ZFSVolume CR state is "Ready" before returning
success. The CSI layer will retry the CreateVolume request when it gets
an error reply, and once the ZFS node agent creates the ZFS volume and sets the
ZFSVolume CR state to "Ready", the controller returns success for the
CreateVolume request and the PVC is then bound.
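
An illustrative (abridged) ZFSVolume object in the state the controller now waits
for; the apiVersion and field names here are assumptions for illustration:

```yaml
apiVersion: zfs.openebs.io/v1alpha1   # assumed version at the time of this change
kind: ZFSVolume
metadata:
  name: pvc-example                   # illustrative name
  namespace: openebs
status:
  state: Ready    # set by the ZFS node agent once the dataset/zvol exists
```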

Signed-off-by: Pawan <pawan@mayadata.io>
2020-05-21 08:49:57 +05:30
Pawan
bd86d4cd48 chore(doc): adding 0.7.0 and 0.6.1 changelog
Also updated the readme with a link for configuring custom topology keys.

Signed-off-by: Pawan <pawan@mayadata.io>
2020-05-15 19:47:42 +05:30