Commit graph

22 commits

Author SHA1 Message Date
Shubham Bajpai
c0343d0480
feat(deps): Bump golang, k8s and lib-csi versions (#444)
- go: 1.19.9
- k8s: v1.27.2
- klog: v2
- lib-csi: v0.7.0

Signed-off-by: shubham <shubham14bajpai@gmail.com>
2023-05-28 22:35:03 +05:30
Pawan Prakash Sharma
65ef14d479
feat(zfs-2.0): adding zstd compression in the validation list (#401)
* feat(zfs-2.0): adding zstd compression in the validation list
* updating action go version to 1.16.5

Signed-off-by: Pawan <pawan@mayadata.io>
2021-11-29 22:04:39 +05:30
Shubham Bajpai
a6462c5234
fix(provisioning): register topologyKeys from driver env (#395)
Signed-off-by: shubham <shubham14bajpai@gmail.com>
2021-10-12 19:39:47 +05:30
Suraj Deshmukh
273bf148d4
chore(yaml): fix a bunch of typos (#348)
Signed-off-by: Suraj Deshmukh <surajd.service@gmail.com>
2021-06-24 18:39:26 +05:30
Shubham Bajpai
3eb2c9e894
feat(scheduling): add zfs pool capacity tracking (#335)
Signed-off-by: shubham <shubham.bajpai@mayadata.io>
2021-05-31 18:59:59 +05:30
Pawan Prakash Sharma
1b30116e5f
feat(migration): adding support to migrate the PV to a new node (#304)
Use case: A node in the Kubernetes cluster is replaced with a new node. The
new node gets a different `kubernetes.io/hostname`. The storage devices
that were attached to the old node are re-attached to the new node.

Fix: Instead of using the default `kubernetes.io/hostname` as the node affinity
label, this commit switches to `openebs.io/nodeid`. The ZFS-LocalPV driver
will pick the value from the node and set it as the affinity.

Once the old node is removed from the cluster, the K8s scheduler will still
try to place the applications on the old node, since the volume's node
affinity still points to it.

The user can now set `openebs.io/nodeid` on the new node to the same value
that was present on the old node. This makes sure the pods/volumes are now
scheduled to the new node.


Note: To migrate the PV to another node, we have to move the disks to that
node, remove the old node from the cluster, and set the same label (same key
and value) on the new node, which lets the k8s scheduler schedule the pods
to that node, as shown below.
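
For example, assuming a hypothetical replacement node `new-node-1` that should take over the old node's id `old-node-id`, the label can be applied with:

kubectl label node new-node-1 openebs.io/nodeid=old-node-id --overwrite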

Other updates: 
* adding faq doc
* renaming the config variable to nodename

Signed-off-by: Pawan <pawan@mayadata.io>
Co-authored-by: Akhil Mohan <akhilerm@gmail.com>

* Update docs/faq.md

Co-authored-by: Akhil Mohan <akhilerm@gmail.com>
2021-05-01 19:05:01 +05:30
Pawan
04f7635b6f feat(provision): try volume creation on all the nodes
Currently the controller picks one node and the node agent keeps trying to
create the volume on that node. There might not be enough space available
on that node to create the volume.

The controller can instead try all the nodes sequentially and fail the
request only if volume creation fails on all the nodes that satisfy the
topology constraints.
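
For context, the topology constraints referred to here typically come from the StorageClass. A minimal sketch (node names hypothetical) that restricts provisioning, and hence the controller's candidate list, to two nodes:

```yaml
# Illustrative StorageClass -- node names are hypothetical
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv
parameters:
  poolname: "zfspv-pool"
provisioner: zfs.csi.openebs.io
allowedTopologies:
- matchLabelExpressions:
  - key: kubernetes.io/hostname
    values:
    - node-1
    - node-2
```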

Signed-off-by: Pawan <pawan@mayadata.io>
2021-04-02 20:36:37 +05:30
Pawan Prakash Sharma
6ec49df225
fix(restore): adding support to restore in an encrypted pool (#292)
An encrypted pool does not allow the volume to be pre-created for the
restore. This changes the design to do the restore first and then create
the ZFSVolume object, which will bind to the volume already created during
the restore.


Signed-off-by: Pawan <pawan@mayadata.io>
2021-03-01 23:56:42 +05:30
Gagandeep Singh
3da4f7308e
chore(refactor): Remove MountInfo struct from api (#225)
Signed-off-by: Gagandeep Singh <codegagan@gmail.com>
2020-10-12 10:59:23 +05:30
Pawan
26968b5394 feat(backup,restore): adding validation for backup and restore
Added schema validation for the backup and restore CRs. Also validating
the server address in the backup/restore controller.

The server address is validated as:

^([0-9]+.[0-9]+.[0-9]+.[0-9]+:[0-9]+)$

which is:

<any number>.<any number>.<any number>.<any number>:<any number>

Here we are validating just the format of the address, not that the IP
itself is correct, which would be a little more complex. In any case, if
the IP is not correct the zfs send will fail, so there is no need for
complex validation of the IP and port.
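
A sketch of where such a pattern-based check lives in the CRD spec (the field name `backupDest` is an assumption for illustration, not taken from this commit):

```yaml
# Sketch only: pattern validation in a v1beta1-era CRD schema.
# The backupDest field name is assumed, not confirmed by this commit.
validation:
  openAPIV3Schema:
    properties:
      spec:
        properties:
          backupDest:
            type: string
            pattern: '^([0-9]+.[0-9]+.[0-9]+.[0-9]+:[0-9]+)$'
```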

Signed-off-by: Pawan <pawan@mayadata.io>
2020-09-30 11:32:32 +05:30
Pawan Prakash Sharma
e40026c98a
feat(zfspv): adding backup and restore support (#162)
This commit adds support for the Backup and Restore controllers, which
watch for the events. The velero plugin will create a Backup CR with the
remote location information to create a backup, and the controller will
send the data to that remote location.

In the same way, the velero plugin will create a Restore CR to restore the
volume from the remote location, and the restore controller will restore
the data.

Steps to use the velero plugin for ZFS-LocalPV:

1. install velero

2. add openebs plugin

velero plugin add openebs/velero-plugin:latest

3. Create the volumesnapshot location:

for a full backup:

```yaml
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: openebs.io/zfspv-blockstore
  config:
    bucket: velero
    prefix: zfs
    namespace: openebs
    provider: aws
    region: minio
    s3ForcePathStyle: "true"
    s3Url: http://minio.velero.svc:9000
```

for an incremental backup:

```yaml
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: openebs.io/zfspv-blockstore
  config:
    bucket: velero
    prefix: zfs
    backup: incremental
    namespace: openebs
    provider: aws
    region: minio
    s3ForcePathStyle: "true"
    s3Url: http://minio.velero.svc:9000
```

4. Create backup

velero backup create my-backup --snapshot-volumes --include-namespaces=velero-ns --volume-snapshot-locations=aws-cloud-default --storage-location=default

5. Create Schedule

velero create schedule newschedule --schedule="*/1 * * * *" --snapshot-volumes --include-namespaces=velero-ns --volume-snapshot-locations=aws-local-default --storage-location=default

6. Restore from backup

velero restore create --from-backup my-backup --restore-volumes=true --namespace-mappings velero-ns:ns1



Signed-off-by: Pawan <pawan@mayadata.io>
2020-09-08 13:44:39 +05:30
vaniisgh
8bbf3d7d2f
feat(zfspv): Add golint check to travis (#175)
Signed-off-by: vaniisgh <vanisingh@live.co.uk>
2020-07-07 18:21:02 +05:30
Pawan
27065bf40a feat(shared): adding shared mount support for ZFSPV volumes
Applications that want to share a volume can use the below storageclass
to make their volumes shared by multiple pods:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv
parameters:
  shared: "yes"
  fstype: "zfs"
  poolname: "zfspv-pool"
provisioner: zfs.csi.openebs.io
```

Now a volume provisioned using this storageclass can be used by multiple pods.
The pods themselves have to take care of data consistency and need to have a
locking mechanism in place. One thing to note here is that the pods will be
scheduled to the node where the volume is present, so that all of them can use
the same volume, as they can access it only locally. This way we can avoid the
NFS overhead and also get optimal performance.
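
For instance (pod and claim names hypothetical), a pod using such a shared volume looks like the sketch below; any other pod referencing the same claim will be scheduled to the same node:

```yaml
# Illustrative only -- pod and claim names are hypothetical
apiVersion: v1
kind: Pod
metadata:
  name: shared-app-1
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: zfspv-shared-claim
```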

Also fixed the log formatting in the GRPC log.

Signed-off-by: Pawan <pawan@mayadata.io>
2020-07-02 00:40:15 +05:30
prateekpandey14
fa76b346a0 feat(modules): migrate to go modules and bump go version 1.14.4
- migrate to go modules
- bump go version 1.14.4

Signed-off-by: prateekpandey14 <prateek.pandey@mayadata.io>
2020-06-09 22:27:01 +05:30
Pawan
1e23607d8a feat(beta): autogen code for v1 CRDs
Signed-off-by: Pawan <pawan@mayadata.io>
2020-06-04 16:02:32 +05:30
Pawan
472fd603ac feat(beta): adding v1 CRD for ZFS-LocalPV
Moving the CRDs to the stable v1 version.

Signed-off-by: Pawan <pawan@mayadata.io>
2020-06-04 16:02:32 +05:30
Pawan
42ed7d85ee fix(readonly): honouring readonly flag.
The readonly flag does not come in as a mount option; it has a
separate field in the request. The ZFS-LocalPV
driver should check that field and add "ro" as a mount option.
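
For illustration (claim name hypothetical), this is the pod-spec excerpt that causes the readonly field to be set in the mount request:

```yaml
# Pod spec excerpt -- claim name is hypothetical
volumes:
- name: data
  persistentVolumeClaim:
    claimName: csi-zfspv
    readOnly: true
```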

Signed-off-by: Pawan <pawan@mayadata.io>
2020-05-27 21:20:53 +05:30
Pawan
d47ec3ba01 feat(print): removing unnecessary printer columns
Signed-off-by: Pawan <pawan@mayadata.io>
2020-05-21 19:47:38 +05:30
Pawan
25d1f1a413 feat(zfspv): pvc should be bound only if volume has been created.
The controller did not check whether the volume had been created and
returned success, which in turn bound the PVC to the PV.

The PVC should not be bound until the corresponding zfs volume has been
created. Now the controller checks that the ZFSVolume CR state is "Ready"
before returning success. The CSI layer retries the CreateVolume request
whenever it gets an error reply, and once the ZFS node agent creates the
ZFS volume and sets the ZFSVolume CR state to "Ready", the controller
returns success for the CreateVolume request and the PVC is bound.
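
An abridged sketch of the CR the controller checks (name illustrative; the exact status layout is an assumption):

```yaml
# CreateVolume succeeds only once the state below is Ready
apiVersion: zfs.openebs.io/v1alpha1
kind: ZFSVolume
metadata:
  name: pvc-example
  namespace: openebs
status:
  state: Ready
```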

Signed-off-by: Pawan <pawan@mayadata.io>
2020-05-21 08:49:57 +05:30
Pawan Prakash Sharma
ae724ee096
feat(validation): adding validation for ZFSPV CR parameters (#66)
Validating a few parameters of the ZFSVolume custom resource:

- compression can be "on", "off", "lzjb", "gzip", "gzip-[1-9]", "zle" or "lz4"
- encryption can be "on", "off", "aes-128-ccm", "aes-192-ccm", "aes-256-ccm", "aes-128-gcm", "aes-192-gcm" or "aes-256-gcm"
- dedup can be "on" or "off"
- poolname can be any string
- ownernodeid can be any string
- thinprovision can be "yes" or "no"
- volumetype can be "DATASET" or "ZVOL"

Also added the required fields needed to create a ZFSVolume CR (see the sketch after this list):
- ownerNodeID
- poolname
- volumeType
- capacity
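
A minimal sketch of a ZFSVolume CR that passes these checks (values hypothetical; the spec field casing is assumed from the list above and may differ from the generated CRD):

```yaml
# Illustrative ZFSVolume -- values hypothetical, field casing assumed
apiVersion: zfs.openebs.io/v1alpha1
kind: ZFSVolume
metadata:
  name: pvc-example
  namespace: openebs
spec:
  ownerNodeID: node-1
  poolname: zfspv-pool
  volumeType: DATASET
  capacity: "4294967296"
  compression: "on"
  dedup: "off"
  thinprovision: "no"
```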


Signed-off-by: Pawan <pawan@mayadata.io>
2020-04-14 17:26:46 +05:30
Prateek Pandey
6033789c17
feat(crd-gen): automate the CRDs generation with validations for APIs (#75)
- To generate the CRD spec, run `make manifest`, which generates them
  under the deploy/yamls directory
- added an update-crd script to automate the steps to generate the
  CRDs and the validation for each type

Signed-off-by: prateekpandey14 <prateek.pandey@mayadata.io>
2020-04-01 17:54:20 +05:30
Pawan Prakash Sharma
c4c2278d2f
refactor(crd): move CR from openebs.io to zfs.openebs.io (#70)
Changed the group name from openebs.io to zfs.openebs.io.

Now the ZFSVolume CRD will look like this:
```
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: zfsvolumes.zfs.openebs.io
spec:
  group: zfs.openebs.io
  version: v1alpha1
  scope: Namespaced
  names:
    plural: zfsvolumes
    singular: zfsvolume
    kind: ZFSVolume
    shortNames:
    - zfsvol
    - zv
```

The ZFSSnapshot CRD will look like this:
```
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: zfssnapshots.zfs.openebs.io
spec:
  group: zfs.openebs.io
  version: v1alpha1
  scope: Namespaced
  names:
    plural: zfssnapshots
    singular: zfssnapshot
    kind: ZFSSnapshot
    shortNames:
    - zfssnapshot
    - zfssnap

```


Signed-off-by: Pawan <pawan@mayadata.io>
2020-03-30 22:12:34 +05:30