Commit graph

214 commits

Author SHA1 Message Date
Pawan
30a7f2317e fix(kust): removing quay as we are using multiarch docker images
Signed-off-by: Pawan <pawan@mayadata.io>
2020-12-04 11:41:18 +05:30
Pawan
d3d4a2da23 chore(doc): update restore doc with node mapping details
Signed-off-by: Pawan <pawan@mayadata.io>
2020-12-02 12:21:47 +05:30
Pawan
d537bd3655 chore(refactor): move xfs and mount code out of zfs package
Signed-off-by: Pawan <pawan@mayadata.io>
2020-12-02 12:20:59 +05:30
Pawan
935a544538 chore(refactor): move btrfs code out of zfs package
Signed-off-by: Pawan <pawan@mayadata.io>
2020-11-25 01:14:48 +05:30
Pawan
3404bc032b chore(doc): adding v1.1.0 changelog
Signed-off-by: Pawan <pawan@mayadata.io>
2020-11-19 07:46:59 +05:30
Pawan
e83e051f83 fix(kust): rename the kustomize.yaml file to kustomization.yaml
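For reference, kustomize only recognizes the file name kustomization.yaml, which is why the rename was needed; a minimal sketch (the resource file name is illustrative):

```yaml
# kustomization.yaml -- the only name kustomize picks up
resources:
- zfs-operator.yaml   # illustrative: the operator manifest to deploy
```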
Signed-off-by: Pawan <pawan@mayadata.io>
2020-11-18 17:53:36 +05:30
Akhil Mohan
b0eee6f26f fix(build): fix release tag env in buildscript
Signed-off-by: Akhil Mohan <akhil.mohan@mayadata.io>
2020-11-18 17:53:02 +05:30
Akhil Mohan
fc4121e5e9 chore(actions): replace deprecated methods in github actions
Signed-off-by: Akhil Mohan <akhil.mohan@mayadata.io>
2020-11-18 17:53:02 +05:30
Aman Gupta
919a058223
chore(yaml): changing the zfs-driver images to multi-arch docker hub images (#237)
Signed-off-by: Aman Gupta <aman.gupta@mayadata.io>
2020-11-14 12:44:38 +05:30
Pawan Prakash Sharma
fb6f1006da
feat(clone): add support for creating the Clone from volume as datasource (#234)
This PR adds the capability to create a clone from a PVC directly:

```
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-clone
spec:
  storageClassName: openebs-snap
  dataSource:
    name: pvc-snap
    kind: PersistentVolumeClaim
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
```
The ZFS-LocalPV driver will create one internal snapshot with the same
name as the new volume and will create a clone out of it. Also,
while destroying the volume, the driver will take care of deleting
the snapshot it created for the clone.
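Conceptually, this maps to the following zfs operations; the pool and dataset names are illustrative, not the driver's exact invocation:

```
zfs snapshot zfspv-pool/pvc-src@pvc-clone                    # internal snapshot, named after the new volume
zfs clone zfspv-pool/pvc-src@pvc-clone zfspv-pool/pvc-clone
# on volume deletion the driver also destroys the snapshot it created
```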

Signed-off-by: Pawan <pawan@mayadata.io>
2020-11-11 18:58:25 +05:30
Prateek Pandey
e52d6c7067
feat(build): support for multi arch container image (#233)
* support for multi arch container image via github actions
* suffix amd64 arch tag in zfs driver image

Signed-off-by: prateekpandey14 <prateek.pandey@mayadata.io>
2020-11-11 14:16:33 +05:30
Pawan
64bc7cb1c9 feat(upgrade): support parallel/faster upgrades for node daemonset
For ZFSPV, all the node daemonset pods can go into the terminating state at
the same time, since the driver does not need any minimum availability of those pods.

Changing maxUnavailable to 100% so that K8s can upgrade all the daemonset
pods in parallel.
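A minimal sketch of the relevant daemonset fragment, assuming the node daemonset is named openebs-zfs-node (fields per the Kubernetes apps/v1 API):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: openebs-zfs-node      # assumed name of the ZFS-LocalPV node daemonset
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 100%    # let K8s recreate all daemonset pods in parallel
```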

Signed-off-by: Pawan <pawan@mayadata.io>
2020-11-03 12:54:58 +05:30
Pawan Prakash Sharma
f386bfc4ce
feat(kustomize): adding deployment via kustomize (#231)
Signed-off-by: Pawan <pawan@mayadata.io>
2020-10-31 10:06:22 +05:30
Pawan
00d3fc134e adding 1.0.0 release changelog
Signed-off-by: Pawan <pawan@mayadata.io>
2020-10-15 20:56:03 +05:30
Pawan
f998bc5c5e chore(changelog): adding changelog for v1.0.1 release
The following PRs were added to the changelog:
- https://github.com/openebs/zfs-localpv/pull/211
- https://github.com/openebs/zfs-localpv/pull/221

Signed-off-by: Pawan <pawan@mayadata.io>
2020-10-15 20:56:03 +05:30
Pawan
1851d4b4e0 chore(doc): updating the doc with the incremental backup details
Signed-off-by: Pawan <pawan@mayadata.io>
2020-10-14 09:58:56 +05:30
Gagandeep Singh
3da4f7308e
chore(refactor): Remove MountInfo struct from api (#225)
Signed-off-by: Gagandeep Singh <codegagan@gmail.com>
2020-10-12 10:59:23 +05:30
Naveenkhasyap
55a155c4a5
add go report card badge for ZFS-LocalPV (#223)
Signed-off-by: Naveenkhasyap <naveen.maltesh@gmail.com>
2020-10-01 14:20:58 +05:30
Pawan
26968b5394 feat(backup,restore): adding validation for backup and restore
Added a schema validation for the backup and restore CRs. Also validating
the server address in the backup/restore controller.

The server address is validated as:

^([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+:[0-9]+)$

which is:

<any number>.<any number>.<any number>.<any number>:<any number>

Here we are validating just the format of the address, not that the IP itself
is correct, which would be a little more complex. In any case, if the IP is not
correct, the zfs send will fail, so there is no need for complex validation of
the IP and port.
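A sketch of how such a pattern could sit in the CR's OpenAPI schema validation; the field name remote is an assumption for illustration, not the exact schema:

```yaml
# hypothetical fragment of the backup CR validation schema
openAPIV3Schema:
  properties:
    spec:
      properties:
        remote:   # assumed field carrying the server address
          type: string
          pattern: '^([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+:[0-9]+)$'
```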

Signed-off-by: Pawan <pawan@mayadata.io>
2020-09-30 11:32:32 +05:30
Pawan Prakash Sharma
5ea411ad05
chore(doc): updating the doc with supported storageclass parameters (#212)
Updating the doc with the supported storageclass parameters.

Also updated the readme with the operator yaml to install the latest release
instead of the ci release, and corrected some formatting in the doc.

Adding hackmd notes from community meetings.

Signed-off-by: Pawan <pawan@mayadata.io>
2020-09-23 23:10:03 +05:30
Pawan
c9ea713333 chore(yaml): removing centos yamls from the repo
Now we have a single operator yaml which works for all
OS distros. We don't need OS-specific operator yamls.

Signed-off-by: Pawan <pawan@mayadata.io>
2020-09-16 21:09:10 +05:30
Pawan
b81f42a526 chore(changelog): adding change log for v1.0.0 release
Signed-off-by: Pawan <pawan@mayadata.io>
2020-09-15 23:33:54 +05:30
Pawan
5d05468694 chore(doc): adding docs for backup and restore
Signed-off-by: Pawan <pawan@mayadata.io>
2020-09-15 23:31:54 +05:30
ajeet_rai
27fe7e3b06
chore(check): Add license-check for .go, .sh, Dockerfile and Makefile (#205)
Signed-off-by: ajeetrai707 <ajeetrai707@gmail.com>
2020-09-08 20:37:59 +05:30
Pawan Prakash Sharma
e40026c98a
feat(zfspv): adding backup and restore support (#162)
This commit adds support for the Backup and Restore controllers, which will be watching for
the events. The velero plugin will create a Backup CR to create a backup
with the remote location information, and the controller will send the data
to that remote location.

In the same way, the velero plugin will create a Restore CR to restore the
volume from the remote location, and the restore controller will restore
the data.
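A hedged sketch of what such a Backup CR might look like; the kind and field names here are assumptions for illustration, not the exact schema:

```yaml
apiVersion: zfs.openebs.io/v1
kind: ZFSBackup               # assumed kind created by the velero plugin
metadata:
  name: backup-pvc-abc123     # illustrative name
  namespace: openebs
spec:
  volumeName: pvc-abc123      # assumed field: volume whose data is sent
  remote: 192.168.1.5:9010    # assumed field: remote location for the zfs stream
```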

Steps to use the velero plugin for ZFS-LocalPV:

1. Install velero.

2. Add the openebs plugin:

velero plugin add openebs/velero-plugin:latest

3. Create the volume snapshot location:

For a full backup:

```yaml
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: openebs.io/zfspv-blockstore
  config:
    bucket: velero
    prefix: zfs
    namespace: openebs
    provider: aws
    region: minio
    s3ForcePathStyle: "true"
    s3Url: http://minio.velero.svc:9000
```

For an incremental backup:

```yaml
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: openebs.io/zfspv-blockstore
  config:
    bucket: velero
    prefix: zfs
    backup: incremental
    namespace: openebs
    provider: aws
    region: minio
    s3ForcePathStyle: "true"
    s3Url: http://minio.velero.svc:9000
```

4. Create a backup:

velero backup create my-backup --snapshot-volumes --include-namespaces=velero-ns --volume-snapshot-locations=aws-cloud-default --storage-location=default

5. Create a schedule:

velero create schedule newschedule  --schedule="*/1 * * * *" --snapshot-volumes --include-namespaces=velero-ns --volume-snapshot-locations=aws-local-default --storage-location=default

6. Restore from the backup:

velero restore create --from-backup my-backup --restore-volumes=true --namespace-mappings velero-ns:ns1



Signed-off-by: Pawan <pawan@mayadata.io>
2020-09-08 13:44:39 +05:30
Pawan Prakash Sharma
a5e645b43d
feat(zfspv): mounting the root filesystem to remove the dependency on the Operating system (#204)
* feat(zfspv): mounting the root filesystem to remove the dependency on the OS

We were mounting the individual libraries to run the zfs
binary inside the ZFS-LocalPV daemonset. The problem with this
is that each OS has a different set of libraries, so we needed different
operator yamls for different OS versions.

Here we are mounting the root directory inside the ZFS-LocalPV daemonset pod,
which chroots to this path and runs the command. Since all the libraries
present on the host are then available inside the pod, we don't need to mount
each library, and it works for all operating systems.

To be on the safe side, we are mounting the host's root directory
as a read-only filesystem.
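A minimal sketch of the relevant pod spec fragment; the container and volume names are illustrative:

```yaml
containers:
- name: openebs-zfs-plugin    # illustrative container name
  volumeMounts:
  - name: host-root
    mountPath: /host          # the plugin chroots here to run the zfs binary
    readOnly: true            # host root is mounted read-only for safety
volumes:
- name: host-root
  hostPath:
    path: /                   # the host's root directory
```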

Signed-off-by: Pawan <pawan@mayadata.io>

* adding comment for namespace

Signed-off-by: Pawan <pawan@mayadata.io>
2020-09-07 21:12:31 +05:30
Pawan
109fbced84 fix(build): update go version to 1.14.7
CVE-2020-16845 has been reported for go versions earlier than 1.14.7;
this PR upgrades the go version for travis builds.

Signed-off-by: Pawan <pawan@mayadata.io>
2020-08-27 23:12:40 +05:30
Pawan
9ef7d35d81 fix(upgrade): Reverting back to old way of checking the volume status
A few customers are using an old version of the driver where
the Status field is not present, so mount will fail after
upgrading to 0.9 or a later version.

Reverting back to checking whether the finalizer is set to determine
if the volume is ready to be mounted.

Signed-off-by: Pawan <pawan@mayadata.io>
2020-08-26 22:11:00 +05:30
Pawan
b0bb8aa059 chore(doc): adding changelog for 0.9.1 release
Signed-off-by: Pawan <pawan@mayadata.io>
2020-08-15 11:01:02 +05:30
Pawan
22d2a0d4cc chore(doc): adding volume capacity roundoff details in readme
Signed-off-by: Pawan <pawan@mayadata.io>
2020-08-10 11:26:54 +05:30
Pawan Prakash Sharma
b1b69ebfe7
fix(zfspv): rounding off the volume size to Gi and Mi (#191)
ZFS does not create the zvol if the volume size is not a multiple of
the volblocksize. There are use cases where a customer creates
a PVC with size 5G, which is 5 * 1000 * 1000 * 1000 bytes
and is not a multiple of the default volblocksize of 8k.

In ZFS, volblocksize and recordsize must be a power of 2 from 512B to 1M,
so keeping the size in the form of Gi or Mi is
sufficient to make volsize a multiple of volblocksize/recordsize.
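To make the round-off concrete (8k = 8192 bytes):

```
5G  = 5,000,000,000 bytes;  5,000,000,000 / 8192 = 610,351.5625  -> not a multiple of 8k
5Gi = 5,368,709,120 bytes;  5,368,709,120 / 8192 = 655,360       -> multiple of 8k
```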


Signed-off-by: Pawan <pawan@mayadata.io>
2020-08-07 20:50:13 +05:30
Waqar Ahmed
8d3705b08b
chore(doc): Update Readme to specify usage of child dataset for poolname (#190)
As discussed in https://github.com/openebs/zfs-localpv/issues/189, it is possible to use a child dataset instead of the root dataset, and this change documents that usage in the readme.

Signed-off-by: Waqar Ahmed <waqarahmedjoyia@live.com>
2020-07-30 20:11:15 +05:30
Pawan
cd4a0de932 chore(doc): adding minimum zfs driver version
Signed-off-by: Pawan <pawan@mayadata.io>
2020-07-27 00:07:25 +05:30
Pawan
5fd1735c68 chore(doc): adding supported features doc for each k8s version
Signed-off-by: Pawan <pawan@mayadata.io>
2020-07-27 00:07:25 +05:30
Pawan
14f237db79 fix(yaml): removing volumeLifecycleModes from the operator yaml
This field was added in Kubernetes 1.16 and informs Kubernetes about
the volume modes that are supported by the driver. The default is
"Persistent" if it is not set.

With this field present, the operator yaml does not work on k8s 1.14 and 1.15.
Since the driver supports those k8s versions, there is no need to mention
volumeLifecycleModes in the operator, as the default is "Persistent" anyway.
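A sketch of the change on the CSIDriver object, assuming the storage.k8s.io/v1beta1 API current at the time (the attachRequired value is illustrative):

```yaml
apiVersion: storage.k8s.io/v1beta1
kind: CSIDriver
metadata:
  name: zfs.csi.openebs.io
spec:
  attachRequired: false       # illustrative
  # volumeLifecycleModes:     # removed: not known to k8s 1.14/1.15 API servers
  # - Persistent
```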

Signed-off-by: Pawan <pawan@mayadata.io>
2020-07-22 23:07:44 +05:30
Pawan
ea552beb1f fix(xfs, uuid): fixed uuid generation issue when mount fails
This issue is specific to xfs only, and occurs when we create a clone volume and the system takes time creating the device.

When we create a clone volume from an xfs filesystem, ZFS-LocalPV will go ahead and generate a new UUID for the clone volume, as we need a new UUID to mount the new clone filesystem. To generate the new UUID, ZFS-LocalPV first replays the xfs log by mounting the device at a tmp location.

What was happening here: since device creation is slow, we went ahead and created the tmp location to mount the clone volume, but since the device had not been created yet, the mount failed. On every subsequent try, since the tmp location was already present, it kept failing at the same point at each reconciliation.
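The usual re-UUID sequence, sketched with illustrative paths rather than the driver's exact code path:

```
mkdir /tmp/pvc-clone                                             # tmp location left behind on failure
mount -o nouuid /dev/zvol/zfspv-pool/pvc-clone /tmp/pvc-clone    # replay the xfs log
umount /tmp/pvc-clone
xfs_admin -U generate /dev/zvol/zfspv-pool/pvc-clone             # write a fresh UUID
```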

Signed-off-by: Pawan <pawan@mayadata.io>
2020-07-22 23:07:16 +05:30
Pawan
e00a6b9ae2 fix(zfspv): mounting the volume if it is ready
Instead of checking for the finalizer, checking that the
volume state is ready is more intuitive before mounting it.

Also removed a duplicate if statement for btrfs which was added while resolving
the merge conflict in https://github.com/openebs/zfs-localpv/pull/175.

Signed-off-by: Pawan <pawan@mayadata.io>
2020-07-22 21:40:00 +05:30
Pawan
b6b4f0bb52 chore(changelog): adding v0.9.0 changelog in the repo
Signed-off-by: Pawan <pawan@mayadata.io>
2020-07-15 14:57:01 +05:30
Aman Gupta
f4ccefa7bd chore(readme): updating the readme for application-consistent snapshot
Signed-off-by: Aman Gupta <aman.gupta@mayadata.io>
2020-07-09 20:47:33 +05:30
Pawan
39d2ee2859 fix(zfspv): fixing mounting issue with xfs
We changed the ubuntu docker image to 20.04 in https://github.com/openebs/zfs-localpv/pull/170,
which has issues formatting the zvol as an xfs filesystem. A filesystem
formatted as xfs fails to mount with the error: "missing codepage or helper program, or other error".

Reverting back to ubuntu 19.10 to fix this issue.

Signed-off-by: Pawan <pawan@mayadata.io>
2020-07-09 19:10:36 +05:30
Pawan
a4bdaec3f2 chore(doc): adding btrfs filesystem in the doc
Signed-off-by: Pawan <pawan@mayadata.io>
2020-07-08 17:25:08 +05:30
Pawan
21045a5b1f feat(bdd): adding snapshot and clone related test cases
Added snapshot- and clone-related test cases. Also restructured
the BDD framework to loop through the supported fstypes and perform all
the test cases we have.

Signed-off-by: Pawan <pawan@mayadata.io>
2020-07-07 23:21:20 +05:30
vaniisgh
8bbf3d7d2f
feat(zfspv): Add golint check to travis (#175)
Signed-off-by: vaniisgh <vanisingh@live.co.uk>
2020-07-07 18:21:02 +05:30
Pawan
8b7ad5cb45 fix(btrfs): fixing duplicate UUID issue with btrfs
btrfs, like xfs, needs a new UUID for the
cloned volumes. All devices with the same UUID are treated
as the same device by btrfs, so here we generate a new UUID for the
cloned volumes using the btrfstune command.
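A sketched invocation, with an illustrative device path:

```
btrfstune -f -u /dev/zvol/zfspv-pool/pvc-clone   # -u writes a new random UUID; -f forces it
```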

Signed-off-by: Pawan <pawan@mayadata.io>
2020-07-03 21:04:51 +05:30
vaniisgh
a19877e4c0
feat(zfspv): check pod-status in BDD test (#171)
Signed-off-by: vaniisgh <vanisingh@live.co.uk>
2020-07-02 14:07:43 +05:30
Pawan
051f26fe16 feat(btrfs): adding support to have btrfs filesystem for ZFS-LocalPV
Now, applications can use the btrfs file system by mentioning "btrfs"
as fstype in the storageclass:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv
parameters:
  fstype: "btrfs"
  poolname: "zfspv-pool"
provisioner: zfs.csi.openebs.io
```

Signed-off-by: Pawan <pawan@mayadata.io>
2020-07-02 00:40:59 +05:30
Pawan
27065bf40a feat(shared): adding shared mount support for ZFSPV volumes
Applications that want to share a volume can use the below storageclass
to make their volumes shared by multiple pods:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv
parameters:
  shared: "yes"
  fstype: "zfs"
  poolname: "zfspv-pool"
provisioner: zfs.csi.openebs.io
```

Now the volume provisioned using this storageclass can be used by multiple pods.
The pods have to ensure data consistency and have their own locking mechanism.
One thing to note: the pods will be scheduled to the node where the volume is
present, so that all of them can use the same volume by accessing it locally.
This way we can avoid the NFS overhead and also get optimal performance.

Also fixed the log formatting in the GRPC log.

Signed-off-by: Pawan <pawan@mayadata.io>
2020-07-02 00:40:15 +05:30
vaniisgh
ac9d6d5729
feat(zfspv): add go lint target (#167)
Signed-off-by: vaniisgh <vanisingh@live.co.uk>
2020-06-30 13:26:12 +05:30
vaniisgh
d0d1664d43
feat(zfspv): move to klog (#166)
Signed-off-by: vaniisgh <vanisingh@live.co.uk>
2020-06-29 12:18:33 +05:30
vaniisgh
54f2b0b9fd
chore(doc): update docs for GO module support (#160)
Signed-off-by: vaniisgh <vanisingh@live.co.uk>
2020-06-26 17:11:31 +05:30