Commit graph

61 commits

Author SHA1 Message Date
Jesse Nelson
347d92a16f add support for providing additional volumes and adding init containers
Signed-off-by: Jesse Nelson <jesse@swirldslabs.com>
Signed-off-by: Niladri Halder <niladri.halder26@gmail.com>
2023-07-26 09:41:13 +00:00
Niladri Halder
0b02e1b0af
chore(changelog): update changelogs and CHANGELOG.md (#446)
Signed-off-by: Niladri Halder <niladri.halder26@gmail.com>
2023-05-29 10:41:19 +05:30
Joel Low
ba0e1749ec
perf(zfs): optimise pool listing for pools with many datasets (#440)
Restricting the `zfs list` command to depth 1 saves a lot of time for
pools with many datasets/zvols.

In my case, before:
```
$ time zfs list -s name -o name,guid,available -H -p >/dev/null
real    0m3.853s
user    0m0.171s
sys     0m3.539s
```

After:
```
$ time zfs list -d 1 -s name -o name,guid,available -H -p >/dev/null
real    0m0.027s
user    0m0.002s
sys     0m0.026s
```

Signed-off-by: Joel Low <joel@joelsplace.sg>
2023-05-29 09:17:28 +05:30
Fábián Tamás László
37a5cb80e2
fix (localpv): fixing CSIStorageCapacity when "poolname" param has child dataset (#393)
Signed-off-by: Fábián Tamás László <giganetom@gmail.com>
2021-10-05 20:24:11 +05:30
Pawan
0e6a02ea74 fix(topo): support old topology key for backward compatibility
Signed-off-by: Pawan <pawan@mayadata.io>
2021-05-04 13:27:51 +05:30
Pawan Prakash Sharma
1b30116e5f
feat(migration): adding support to migrate the PV to a new node (#304)
Usecase: A node in the Kubernetes cluster is replaced with a new node. The 
new node gets a different `kubernetes.io/hostname`. The storage devices
that were attached to the old node are re-attached to the new node. 

Fix: Instead of using the default `kubernetes.io/hostname` as the node affinity
label, this commit switches to `openebs.io/nodeid`. The ZFS-LocalPV driver
will pick the value of this label from the node and set the affinity.

Once the old node is removed from the cluster, the K8s scheduler will keep trying
to schedule the applications to the old node only, since the PV's node affinity still points to it.

The user can now set the value of `openebs.io/nodeid` on the new node to the same
value that was present on the old node. This makes sure the pods/volumes are
scheduled to the new node.


Note: To migrate the PV to another node, we have to move the disks to that node,
remove the old node from the cluster, and set the same label (same key and value) on the
new node, which lets the k8s scheduler schedule the pods to that node, as sketched below.
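
A minimal sketch of the relabeled new node (the node name and nodeid value are hypothetical; any value works as long as it matches what the old node carried):

```yaml
apiVersion: v1
kind: Node
metadata:
  name: new-node-1                     # hypothetical replacement node
  labels:
    kubernetes.io/hostname: new-node-1
    openebs.io/nodeid: worker-a        # same value that was set on the old node
```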

Other updates: 
* adding faq doc
* renaming the config variable to nodename

Signed-off-by: Pawan <pawan@mayadata.io>
Co-authored-by: Akhil Mohan <akhilerm@gmail.com>

* Update docs/faq.md

Co-authored-by: Akhil Mohan <akhilerm@gmail.com>
2021-05-01 19:05:01 +05:30
Pawan Prakash Sharma
1dd25a61a2
chore(changelog): adding 1.6.0 changelog (#311)
Signed-off-by: pawan <pawanprakash101@gmail.com>
2021-04-16 12:55:44 +05:30
Pawan
04f7635b6f feat(provision): try volume creation on all the nodes
Currently the controller picks one node and the node agent keeps trying to
create the volume on that node, but there might not be enough space available
on that node to create the volume.

The controller can instead try all the nodes sequentially and fail
the request only if volume creation fails on all the nodes that satisfy the
topology constraints.

Signed-off-by: Pawan <pawan@mayadata.io>
2021-04-02 20:36:37 +05:30
Prateek Pandey
b1aa6ab51a
refact(deps): bump k8s and client-go deps to version v0.20.2 (#294)
Signed-off-by: prateekpandey14 <prateek.pandey@mayadata.io>
2021-03-31 16:43:42 +05:30
Shubham Bajpai
533e17a9aa
chore(k8s): updated storage and apiextension version to v1 (#299)
Signed-off-by: shubham <shubham.bajpai@mayadata.io>
2021-03-31 15:09:48 +05:30
Pawan Prakash Sharma
b797e04d92
chore(changelog): adding v1.5.0 changelog (#297)
Signed-off-by: Pawan <pawan@mayadata.io>
2021-03-15 22:17:32 +05:30
Shubham Bajpai
6c6d593437
chore(actions): move bdd tests to github actions (#293)
Signed-off-by: shubham <shubham.bajpai@mayadata.io>
2021-03-05 20:08:29 +05:30
Pawan Prakash Sharma
6ec49df225
fix(restore): adding support to restore in an encrypted pool (#292)
An encrypted pool does not allow the volume to be pre-created for
restore purposes. This changes the design to do the restore first
and then create the ZFSVolume object, which will bind to the volume
already created during the restore.


Signed-off-by: Pawan <pawan@mayadata.io>
2021-03-01 23:56:42 +05:30
Pawan
77e722989c chore(changelog): adding 1.4.0 changelog
Signed-off-by: Pawan <pawan@mayadata.io>
2021-02-16 00:01:18 +05:30
Pawan
88ad25ec9c feat(resize): adding resize support for raw block volumes
Signed-off-by: Pawan <pawan@mayadata.io>
2021-02-02 12:44:02 +05:30
Pawan
b64db082be chore(changelog): adding v1.3.0 changelog
Signed-off-by: Pawan <pawan@mayadata.io>
2021-01-14 20:47:37 +05:30
Pawan
90ecfe9c73 feat(schd): adding capacity weighted scheduler
The ZFS driver will use the capacity scheduler to pick the node that has
the least capacity occupied by volumes. This is now the default scheduler,
as it is better than volume-count-based scheduling. We can use the below
storageclass to specify the scheduler:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv
allowVolumeExpansion: true
parameters:
  scheduler: "CapacityWeighted"
  poolname: "zfspv-pool"
provisioner: zfs.csi.openebs.io
```

Please note that there will be a change in behavior after the upgrade:
if the `scheduler` parameter is not set in the storage class, the ZFS driver
will pick the node based on volume capacity weight instead of volume count.

Signed-off-by: Pawan <pawan@mayadata.io>
2021-01-07 10:38:44 +05:30
Pawan
48e6a19d7c chore(doc): adding 1.2.0 changelog
Signed-off-by: Pawan <pawan@mayadata.io>
2020-12-16 01:41:17 +05:30
Pawan
b42893ce47 fix(mount): fixing idempotency check for the mount path
Signed-off-by: Pawan <pawan@mayadata.io>
2020-12-15 14:01:30 +05:30
Pawan Prakash Sharma
a73a59fd49
feat(sanity): adding CSI Sanity test (#232)
* adding CSI Sanity test for ZFS-LocalPV
* make lowercase at all the places

Signed-off-by: Pawan <pawan@mayadata.io>
2020-12-10 11:53:16 +05:30
Pawan
3404bc032b chore(doc): adding v1.1.0 changelog
Signed-off-by: Pawan <pawan@mayadata.io>
2020-11-19 07:46:59 +05:30
Pawan Prakash Sharma
fb6f1006da
feat(clone): add support for creating the Clone from volume as datasource (#234)
This PR adds the capability to create a clone from a PVC directly:

```
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-clone
spec:
  storageClassName: openebs-snap
  dataSource:
    name: pvc-snap
    kind: PersistentVolumeClaim
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
```
The ZFS-LocalPV driver will create an internal snapshot with the same name
as the new volume and create a clone out of it. While destroying the volume,
the driver will also take care of deleting the snapshot created for the clone.

Signed-off-by: Pawan <pawan@mayadata.io>
2020-11-11 18:58:25 +05:30
Pawan
64bc7cb1c9 feat(upgrade): support parallel/faster upgrades for node daemonset
For ZFSPV, all the node daemonset pods can go into the terminating state at
the same time, since the driver does not need any minimum availability of those pods.

Changing maxUnavailable to 100% so that K8s can upgrade all the daemonset
pods in parallel, as sketched below.
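
A minimal sketch of the relevant daemonset fields (the daemonset name and namespace are illustrative, not necessarily the ones used in the operator yaml):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: openebs-zfs-node       # illustrative name
  namespace: kube-system       # illustrative namespace
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 100%     # let K8s restart all node pods in parallel
```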

Signed-off-by: Pawan <pawan@mayadata.io>
2020-11-03 12:54:58 +05:30
Pawan Prakash Sharma
f386bfc4ce
feat(kustomize): adding deployment via kustomize (#231)
Signed-off-by: Pawan <pawan@mayadata.io>
2020-10-31 10:06:22 +05:30
Pawan
f998bc5c5e chore(changelog): adding changelog for v1.0.1 release
The following PRs were added to the changelog:
- https://github.com/openebs/zfs-localpv/pull/211
- https://github.com/openebs/zfs-localpv/pull/221

Signed-off-by: Pawan <pawan@mayadata.io>
2020-10-15 20:56:03 +05:30
Gagandeep Singh
3da4f7308e
chore(refactor): Remove MountInfo struct from api (#225)
Signed-off-by: Gagandeep Singh <codegagan@gmail.com>
2020-10-12 10:59:23 +05:30
Pawan
26968b5394 feat(backup,restore): adding validation for backup and restore
Added schema validation for the backup and restore CRs. Also validating
the server address in the backup/restore controller.

Validating the server address as:

^([0-9]+.[0-9]+.[0-9]+.[0-9]+:[0-9]+)$

which is:

<any number>.<any number>.<any number>.<any number>:<any number>

Here we are only validating the format of the address, not that the IP itself
is correct, which would be a little more complex. In any case, if the IP is not
correct, the zfs send will fail, so there is no need for complex validation of
the IP and port.

Signed-off-by: Pawan <pawan@mayadata.io>
2020-09-30 11:32:32 +05:30
Pawan
c9ea713333 chore(yaml): removing centos yamls from the repo
Now we have a single operator yaml that works for all OS distros,
so we no longer need OS-specific operator yamls.

Signed-off-by: Pawan <pawan@mayadata.io>
2020-09-16 21:09:10 +05:30
Pawan
b81f42a526 chore(changelog): adding change log for v1.0.0 release
Signed-off-by: Pawan <pawan@mayadata.io>
2020-09-15 23:33:54 +05:30
Pawan Prakash Sharma
e40026c98a
feat(zfspv): adding backup and restore support (#162)
This commit adds support for the Backup and Restore controllers, which will be watching for
these events. The velero plugin will create a Backup CR with the remote location
information, and the controller will send the data to that remote location.

In the same way, the velero plugin will create a Restore CR to restore the
volume from the remote location, and the restore controller will restore
the data.

Steps to use the velero plugin for ZFS-LocalPV:

1. install velero

2. add openebs plugin

velero plugin add openebs/velero-plugin:latest

3. Create the volumesnapshot location:

For a full backup:

```yaml
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: openebs.io/zfspv-blockstore
  config:
    bucket: velero
    prefix: zfs
    namespace: openebs
    provider: aws
    region: minio
    s3ForcePathStyle: "true"
    s3Url: http://minio.velero.svc:9000
```

For an incremental backup:

```yaml
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: openebs.io/zfspv-blockstore
  config:
    bucket: velero
    prefix: zfs
    backup: incremental
    namespace: openebs
    provider: aws
    region: minio
    s3ForcePathStyle: "true"
    s3Url: http://minio.velero.svc:9000
```

4. Create backup

velero backup create my-backup --snapshot-volumes --include-namespaces=velero-ns --volume-snapshot-locations=aws-cloud-default --storage-location=default

5. Create Schedule

velero create schedule newschedule  --schedule="*/1 * * * *" --snapshot-volumes --include-namespaces=velero-ns --volume-snapshot-locations=aws-local-default --storage-location=default

6. Restore from backup

velero restore create --from-backup my-backup --restore-volumes=true --namespace-mappings velero-ns:ns1



Signed-off-by: Pawan <pawan@mayadata.io>
2020-09-08 13:44:39 +05:30
Pawan Prakash Sharma
a5e645b43d
feat(zfspv): mounting the root filesystem to remove the dependency on the Operating system (#204)
* feat(zfspv): mounting the root filesystem to remove the dependency on the OS

We were mounting individual libraries to run the zfs
binary inside the ZFS-LocalPV daemonset. The problem with this
is that each OS has a different set of libraries, so we needed different
operator yamls for different OS versions.

Here we mount the host's root directory inside the ZFS-LocalPV daemonset pod,
and the driver does a chroot to this path to run the command. Since all the libraries
present on the host are then available inside the pod, we don't need to mount each
library individually, and it works for all operating systems.

To be on the safe side, we mount the host's root directory
as a read-only filesystem, as sketched below.
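
A minimal sketch of the relevant part of the daemonset pod spec (container name and mount path are illustrative):

```yaml
containers:
  - name: openebs-zfs-plugin
    volumeMounts:
      - name: host-root
        mountPath: /host       # the plugin chroots into this path to run zfs
        readOnly: true         # host root is mounted read-only for safety
volumes:
  - name: host-root
    hostPath:
      path: /
      type: Directory
```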

Signed-off-by: Pawan <pawan@mayadata.io>

* adding comment for namespace

Signed-off-by: Pawan <pawan@mayadata.io>
2020-09-07 21:12:31 +05:30
Pawan
109fbced84 fix(build): update go version to 1.14.7
CVE-2020-16845 has been reported for Go versions earlier than 1.14.7;
this PR upgrades the Go version used for travis builds.

Signed-off-by: Pawan <pawan@mayadata.io>
2020-08-27 23:12:40 +05:30
Pawan
9ef7d35d81 fix(upgrade): Reverting back to old way of checking the volume status
A few customers are using an old version of the driver where the
Status field is not present, so mounts would fail after upgrading
to the 0.9 or a later version.

Reverting back to checking whether the finalizer is set to determine if the
volume is ready to be mounted.

Signed-off-by: Pawan <pawan@mayadata.io>
2020-08-26 22:11:00 +05:30
Pawan
b0bb8aa059 chore(doc): adding changelog for 0.9.1 release
Signed-off-by: Pawan <pawan@mayadata.io>
2020-08-15 11:01:02 +05:30
Pawan Prakash Sharma
b1b69ebfe7
fix(zfspv): rounding off the volume size to Gi and Mi (#191)
ZFS does not create the zvol if the volume size is not a multiple of
the volblocksize. There are use cases where a customer creates
a PVC with a size of 5G, which is 5 * 1000 * 1000 * 1000 bytes
and not a multiple of the default volblocksize of 8k.

In ZFS, volblocksize and recordsize must be a power of 2 from 512B to 1M,
so keeping the size in the form of Gi or Mi is
sufficient to make the volsize a multiple of the volblocksize/recordsize.
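
For example, 5G is 5,000,000,000 bytes, which does not divide evenly by the 8192-byte default volblocksize, whereas 5Gi is 5,368,709,120 bytes = 655,360 × 8192.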


Signed-off-by: Pawan <pawan@mayadata.io>
2020-08-07 20:50:13 +05:30
Pawan
14f237db79 fix(yaml): removing volumeLifecycleModes from the operator yaml
This field was added in Kubernetes 1.16 and informs Kubernetes about
the volume modes that are supported by the driver. The default is
"Persistent" if it is not set.

With this field present, the operator yaml does not work on k8s 1.14 and 1.15. Since the
driver supports those k8s versions, there is no need to mention volumeLifecycleModes in the
operator, as the default is "Persistent"; see the sketch below.
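
A rough sketch of the CSIDriver object with the field omitted (the other spec fields shown are illustrative):

```yaml
apiVersion: storage.k8s.io/v1beta1
kind: CSIDriver
metadata:
  name: zfs.csi.openebs.io
spec:
  attachRequired: false
  podInfoOnMount: false
  # volumeLifecycleModes omitted: k8s 1.16+ defaults it to ["Persistent"],
  # while 1.14/1.15 do not recognize the field
```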

Signed-off-by: Pawan <pawan@mayadata.io>
2020-07-22 23:07:44 +05:30
Pawan
ea552beb1f fix(xfs, uuid): fixed uuid generation issue when mount fails
This issue is specific to xfs and occurs when we create a clone volume and the system takes time to create the device.

When we create a clone volume from an xfs filesystem, ZFS-LocalPV goes ahead and generates a new UUID for the clone volume, since we need a new UUID to mount the cloned filesystem. To generate the new UUID, ZFS-LocalPV first replays the xfs log by mounting the device at a tmp location.

What was happening here is that, because device creation is slow, we went ahead and created the tmp location to mount the clone volume, but since the device had not been created yet, the mount failed. On the next try, since the tmp location already exists, it keeps failing at that point on every reconciliation.

Signed-off-by: Pawan <pawan@mayadata.io>
2020-07-22 23:07:16 +05:30
Pawan
e00a6b9ae2 fix(zfspv): mounting the volume if it is ready
Instead of checking for the finalizer, checking that the
volume state is ready is more intuitive before mounting it.

Also removed a duplicate if statement for btrfs which was added while resolving
the merge conflict in https://github.com/openebs/zfs-localpv/pull/175.

Signed-off-by: Pawan <pawan@mayadata.io>
2020-07-22 21:40:00 +05:30
Pawan
b6b4f0bb52 chore(changelog): adding v0.9.0 changelog in the repo
Signed-off-by: Pawan <pawan@mayadata.io>
2020-07-15 14:57:01 +05:30
Pawan
39d2ee2859 fix(zfspv): fixing mounting issue with xfs
We changed the ubuntu docker image to 20.04 in https://github.com/openebs/zfs-localpv/pull/170,
which has issues with formatting the zvol as an xfs filesystem. A filesystem
formatted as xfs fails to mount with the error: "missing codepage or helper program, or other error".

Reverting back to ubuntu 19.10 to fix this issue.

Signed-off-by: Pawan <pawan@mayadata.io>
2020-07-09 19:10:36 +05:30
Pawan
21045a5b1f feat(bdd): adding snapshot and clone related test cases
Added snapshot and clone related test cases. Also restructured
the BDD framework to loop through the supported fstypes and perform all
the test cases we have.

Signed-off-by: Pawan <pawan@mayadata.io>
2020-07-07 23:21:20 +05:30
vaniisgh
8bbf3d7d2f
feat(zfspv) Add golint check to travis (#175)
Signed-off-by: vaniisgh <vanisingh@live.co.uk>
2020-07-07 18:21:02 +05:30
Pawan
8b7ad5cb45 fix(btrfs): fixing duplicate UUID issue with btrfs
btrfs, like xfs, needs a new UUID for cloned volumes. All devices with
the same UUID are treated the same by btrfs, so here we generate a new UUID
for the cloned volumes using the btrfstune command.

Signed-off-by: Pawan <pawan@mayadata.io>
2020-07-03 21:04:51 +05:30
vaniisgh
a19877e4c0
feat(zfspv): check pod-status in BDD test (#171)
Signed-off-by: vaniisgh <vanisingh@live.co.uk>
2020-07-02 14:07:43 +05:30
Pawan
051f26fe16 feat(btrfs): adding support to have btrfs filesystem for ZFS-LocalPV
Now, applications can use the btrfs filesystem by mentioning "btrfs"
as the fstype in the storageclass:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv
parameters:
  fstype: "btrfs"
  poolname: "zfspv-pool"
provisioner: zfs.csi.openebs.io

Signed-off-by: Pawan <pawan@mayadata.io>
2020-07-02 00:40:59 +05:30
Pawan
27065bf40a feat(shared): adding shared mount support for ZFSPV volumes
Applications that want to share a volume can use the below storageclass
to make their volumes shareable by multiple pods:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv
parameters:
  shared: "yes"
  fstype: "zfs"
  poolname: "zfspv-pool"
provisioner: zfs.csi.openebs.io
```

Now a volume provisioned using this storageclass can be used by multiple pods.
The pods have to take care of data consistency themselves and need their own locking mechanism.
One thing to note is that the pods will be scheduled to the node where the volume is present,
so that all the pods can use the same volume by accessing it locally.
This way we avoid the NFS overhead and also get optimal performance.

Also fixed the log formatting in the GRPC log.

Signed-off-by: Pawan <pawan@mayadata.io>
2020-07-02 00:40:15 +05:30
vaniisgh
ac9d6d5729
feat(zfspv) add go lint target (#167)
Signed-off-by: vaniisgh <vanisingh@live.co.uk>
2020-06-30 13:26:12 +05:30
vaniisgh
d0d1664d43
feat(zfspv): move to klog (#166)
Signed-off-by: vaniisgh <vanisingh@live.co.uk>
2020-06-29 12:18:33 +05:30
vaniisgh
54f2b0b9fd
chore(doc): update docs for GO module support (#160)
Signed-off-by: vaniisgh <vanisingh@live.co.uk>
2020-06-26 17:11:31 +05:30
vaniisgh
13ec77c75e
feat(zfspv): filter grpc logs to reduce the pollution (#161)
Signed-off-by: vaniisgh <vanisingh@live.co.uk>
2020-06-24 21:41:15 +05:30