Commit graph

89 commits

Jesse Nelson
884540e57b Add regex for zstd levels
Signed-off-by: Jesse Nelson <jesse@swirldslabs.com>
Signed-off-by: Niladri Halder <niladri.halder26@gmail.com>
2023-07-26 09:43:40 +00:00
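As a rough, hypothetical illustration of the commit above (both the key name and the exact pattern are assumptions, not taken from this log), a validation regex for zstd levels could look like:

```yaml
# Hypothetical sketch only: key name and pattern are assumptions.
# Matches "zstd" plus explicit levels such as "zstd-7" or "zstd-19".
compressionValidation:
  pattern: "^zstd(-([1-9]|1[0-9]))?$"
```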
Jesse Nelson
5da343aac1 initialize values
Signed-off-by: Jesse Nelson <jesse@swirldslabs.com>
Signed-off-by: Niladri Halder <niladri.halder26@gmail.com>
2023-07-26 09:41:30 +00:00
Jesse Nelson
347d92a16f add support for providing additional volumes and adding init containers
Signed-off-by: Jesse Nelson <jesse@swirldslabs.com>
Signed-off-by: Niladri Halder <niladri.halder26@gmail.com>
2023-07-26 09:41:13 +00:00
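A hedged sketch of how the feature in the commit above might surface in the chart values; all key names (`initContainers`, `additionalVolumes`, `additionalVolumeMounts`) are assumptions and may not match the chart:

```yaml
# Hypothetical values.yaml fragment; key names are assumptions.
zfsNode:
  initContainers:
    - name: wait-for-zfs                 # illustrative init container
      image: busybox:1.36
      command: ["sh", "-c", "until [ -e /dev/zfs ]; do sleep 1; done"]
  additionalVolumes:
    - name: extra-config
      configMap:
        name: zfs-extra-config
  additionalVolumeMounts:
    - name: extra-config
      mountPath: /etc/zfs-extra
```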
Niladri Halder
b70fb1e847 feat(deploy): v2.2.0 changes
- update sidecar container registry to registry.k8s.io
- update helm chart

Signed-off-by: Niladri Halder <niladri.halder26@gmail.com>
2023-05-29 14:40:13 +05:30
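For the sidecar registry change noted above, a hedged sketch of what such a values fragment might look like (key names and the tag are illustrative assumptions):

```yaml
# Hypothetical fragment; key names and tag are illustrative only.
csiProvisioner:
  image:
    registry: registry.k8s.io
    repository: sig-storage/csi-provisioner
    tag: v3.4.0
```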
Pawan Prakash Sharma
8adedda7b6
feat(helm): adding 2.1.0 helm chart (#409)
Signed-off-by: Pawan <pawan@mayadata.io>
2022-04-18 20:08:53 +05:30
Shubham Bajpai
ea246090af
[stable/zfs-localpv]: update charts to 2.0.0 (#396)
Signed-off-by: shubham <shubham.bajpai@mayadata.io>
2022-01-11 16:05:30 +05:30
Pawan Prakash Sharma
65ef14d479
feat(zfs-2.0): adding zstd compression in the validation list (#401)
* feat(zfs-2.0): adding zstd compression in the validation list
* updating action go version to 1.16.5

Signed-off-by: Pawan <pawan@mayadata.io>
2021-11-29 22:04:39 +05:30
Shubham Bajpai
a6462c5234
fix(provisioning): register topologyKeys from driver env (#395)
Signed-off-by: shubham <shubham14bajpai@gmail.com>
2021-10-12 19:39:47 +05:30
Kiran Mova
f0f4da3eaf
chore(release): bump up the release to 1.9.3 (#388)
update app (driver) and chart versions to 1.9.3 and 1.9.8
respectively.

Signed-off-by: kmova <kiran.mova@mayadata.io>
2021-09-18 00:34:25 +05:30
Kiran Mova
9d2966057a
chore(ci): updating branch reference from master to develop(HEAD) (#384) (#385)
* chore(ci): updating branch reference from master to develop(HEAD) (#384)


Signed-off-by: mittachaitu <sai.chaithanya@mayadata.io>
Co-authored-by: sai chaithanya <sai.chaithanya@mayadata.io>
2021-09-15 18:58:12 +05:30
Travis Athougies
95428184cc
feat(helm): Allow specifying path to zfs binary (#383)
* Allow specifying path to zfs binary

Some Linux/UNIX distributions do not follow standard path conventions. The driver currently assumes the zfs binary is in /sbin or /usr/sbin, but on NixOS, for example, it's in /run/current-system/sw/bin.

This adds an option to specify the directory manually.

* Bump chart version


Signed-off-by: Travis Athougies <travis@athougies.net>
2021-09-15 18:23:50 +05:30
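A minimal sketch of what the new option might look like in the chart values, assuming a key such as `zfsNode.binaryPath` (the actual key name added by this commit is not shown in this log):

```yaml
# Hypothetical values fragment; the real key name may differ.
zfsNode:
  # Directory containing the zfs binary on the host, e.g. on NixOS:
  binaryPath: /run/current-system/sw/bin
```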
Prateek Pandey
bf437b9cc3
chore(helm): bump csi sidecars and add storagecapacity in csidriver (#377)
Signed-off-by: prateekpandey14 <prateek.pandey@mayadata.io>
2021-09-03 14:07:35 +05:30
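Storage capacity tracking is exposed through the `storageCapacity` field of the CSIDriver object (storage.k8s.io/v1); a sketch of what the rendered object could look like, with fields other than `storageCapacity` being illustrative:

```yaml
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: zfs.csi.openebs.io      # driver name used by ZFS-LocalPV
spec:
  attachRequired: false         # illustrative; local volumes need no attach
  storageCapacity: true         # publish CSIStorageCapacity objects
```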
Aman Gupta
5030cb4acf
Update(deploy): Update the csi-provisioner version to v3.0.0 (#374)
Signed-off-by: w3aman <aman.gupta@mayadata.io>
2021-09-01 11:29:54 +05:30
prateekpandey14
16f14c33ec fix(templates): update csi driver templates with priorityclass
Signed-off-by: prateekpandey14 <prateek.pandey@mayadata.io>
2021-08-30 17:56:37 +05:30
Shubham Bajpai
fefbc5b30a
[stable/zfs-localpv]: update charts to 1.9.1 (#373)
Signed-off-by: shubham <shubham.bajpai@mayadata.io>
2021-08-18 20:12:17 +05:30
Prateek Pandey
95d5d3a8d3
refact(operator): update zfs operator with custom priorityclass (#367)
Signed-off-by: prateekpandey14 <prateek.pandey@mayadata.io>
2021-08-09 19:44:22 +05:30
Prateek Pandey
7256a6d65c
[stable/zfs]: add custom priorityclass template for zfs charts (#363)
Signed-off-by: prateekpandey14 <prateek.pandey@mayadata.io>
2021-07-28 12:39:49 +05:30
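As a hedged illustration of the custom priority class templates in the last few commits, the chart values might expose something like the fragment below, with the class then referenced as `priorityClassName` in the rendered pod specs; all key names are assumptions:

```yaml
# Hypothetical values fragment; key names are assumptions.
priorityClass:
  create: true
  name: zfs-csi-critical
# Rendered pod spec fragment (sketch):
# spec:
#   priorityClassName: zfs-csi-critical
```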
Stéphane Bidoul
c399c1b522
chore(yaml): add zfsnode-crd.yml to kustomize resources (#365)
Signed-off-by: Stéphane Bidoul <stephane.bidoul@acsone.eu>
2021-07-24 19:16:09 +05:30
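A sketch of the kustomize resource list after this change; `zfsnode-crd.yml` is named in the commit, the other filename is an illustrative assumption:

```yaml
# kustomization.yaml (sketch); only zfsnode-crd.yml is confirmed by the commit.
resources:
  - zfs-operator.yaml
  - zfsnode-crd.yml
```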
Aadhav Vignesh
7036f70496
refactor: fix helm chart description (#361)
- fix badge url in deploy/helm/README.md

Signed-off-by: Aadhav Vignesh <aadhav.n1@gmail.com>
2021-07-19 13:07:39 +05:30
Shubham Bajpai
90cf61c1cb
[stable/zfs-localpv]: update charts to 1.9.0 (#359)
* [stable/zfs-localpv]: update charts to 1.9.0
* bump kind action to v1.2.0

Signed-off-by: shubham <shubham.bajpai@mayadata.io>
2021-07-16 12:30:17 +05:30
Hugo Renard
a88eb8bf9a
fix(chart): remove zfsNode.podLabels.name from values (#352)
Signed-off-by: Hugo Renard <hugo.renard@protonmail.com>
2021-06-29 15:36:01 +05:30
Suraj Deshmukh
273bf148d4
chore(yaml): fix bunch of typos (#348)
Signed-off-by: Suraj Deshmukh <surajd.service@gmail.com>
2021-06-24 18:39:26 +05:30
Shubham Bajpai
96608c066f
chore(helm): update config default values in README (#346)
Signed-off-by: shubham <shubham.bajpai@mayadata.io>
2021-06-15 19:59:31 +05:30
Shubham Bajpai
bac8b57848
[stable/zfs-localpv]: update charts to 1.8.0 (#344)
Signed-off-by: shubham <shubham.bajpai@mayadata.io>
2021-06-15 19:10:43 +05:30
Shovan Maity
ce6efdc84b
feat(charts): set default fstype to ext4 (#339)
Set the default fstype to ext4 in csi-provisioner. This is helpful when
fsType is not mentioned in the StorageClass (see the sketch below).

Signed-off-by: Shovan Maity <shovan.cse91@gmail.com>
2021-06-04 10:33:16 +05:30
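The csi-provisioner sidecar accepts a `--default-fstype` flag, so the controller manifest could carry an argument like the fragment below (the image tag shown is illustrative):

```yaml
# csi-provisioner container fragment in the controller manifest (sketch).
- name: csi-provisioner
  image: k8s.gcr.io/sig-storage/csi-provisioner:v2.2.2   # illustrative tag
  args:
    - "--csi-address=$(ADDRESS)"
    - "--default-fstype=ext4"
```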
Shubham Bajpai
3eb2c9e894
feat(scheduling): add zfs pool capacity tracking (#335)
Signed-off-by: shubham <shubham.bajpai@mayadata.io>
2021-05-31 18:59:59 +05:30
Shubham Bajpai
4fce22afb5
[stable/zfs-localpv]: update charts to 1.7.0 (#329)
Signed-off-by: shubham <shubham.bajpai@mayadata.io>
2021-05-15 10:49:12 +05:30
Pawan Prakash Sharma
1b30116e5f
feat(migration): adding support to migrate the PV to a new node (#304)
Use case: A node in the Kubernetes cluster is replaced with a new node. The
new node gets a different `kubernetes.io/hostname`. The storage devices
that were attached to the old node are re-attached to the new node.

Fix: Instead of using the default `kubernetes.io/hostname` as the node affinity
label, this commit switches to `openebs.io/nodeid`. The ZFS LocalPV driver
will pick the value from the nodes and set the affinity.

Once the old node is removed from the cluster, the K8s scheduler will still try
to schedule applications on the old node only, because the PV node affinity
still points to it.

The user can now set the value of `openebs.io/nodeid` on the new node to the
same value that was used on the old node. This ensures the pods/volumes are
scheduled to the new node.


Note: To migrate the PV to another node, move the disks to that node, remove
the old node from the cluster, and set the same label (same key and value) on
the new node; the K8s scheduler can then schedule the pods to that node.

Other updates: 
* adding faq doc
* renaming the config variable to nodename

Signed-off-by: Pawan <pawan@mayadata.io>
Co-authored-by: Akhil Mohan <akhilerm@gmail.com>

* Update docs/faq.md

Co-authored-by: Akhil Mohan <akhilerm@gmail.com>
2021-05-01 19:05:01 +05:30
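To make the migration step concrete, the replacement node would be given the same `openebs.io/nodeid` value the old node had, for example (node name and label value are illustrative):

```yaml
# Sketch: relabel the replacement node with the old node's id.
apiVersion: v1
kind: Node
metadata:
  name: worker-new                 # illustrative node name
  labels:
    openebs.io/nodeid: node-1      # reuse the value from the old node
```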
Shubham Bajpai
fb71595d80
fix(helm): update the crds in helm charts (#314)
Signed-off-by: shubham <shubham.bajpai@mayadata.io>
2021-04-19 13:21:35 +05:30
shubham
16495fb1ea [stable/zfs-localpv]: update charts to 1.6.0
Signed-off-by: shubham <shubham.bajpai@mayadata.io>
2021-04-16 14:30:44 +05:30
Pawan
04f7635b6f feat(provision): try volume creation on all the nodes
Currently the controller picks one node and the node agent keeps trying to
create the volume on that node. There might not be enough space available
on that node to create the volume.

The controller can instead try all the nodes sequentially and fail
the request only if volume creation fails on every node that satisfies the
topology constraints.

Signed-off-by: Pawan <pawan@mayadata.io>
2021-04-02 20:36:37 +05:30
Pawan Prakash Sharma
8cc56377bd
chore(yaml): fixing autogen yaml description (#301)
Signed-off-by: Pawan <pawan@mayadata.io>
2021-04-01 13:02:51 +05:30
Shubham Bajpai
533e17a9aa
chore(k8s): updated storage and apiextension version to v1 (#299)
Signed-off-by: shubham <shubham.bajpai@mayadata.io>
2021-03-31 15:09:48 +05:30
Shubham Bajpai
3162112327
[stable/zfs-localpv]: update charts to 1.5.0 (#296)
Signed-off-by: shubham <shubham.bajpai@mayadata.io>
2021-03-16 12:30:24 +05:30
Pawan Prakash Sharma
6ec49df225
fix(restore): adding support to restore in an encrypted pool (#292)
An encrypted pool does not allow the volume to be pre-created for
restore. This changes the design to do the restore first
and then create the ZFSVolume object, which will bind to the volume
already created during the restore.


Signed-off-by: Pawan <pawan@mayadata.io>
2021-03-01 23:56:42 +05:30
Shubham Bajpai
11a1034b0a
[stable/zfs-localpv]: update charts to 1.4.0 (#285)
- update chart version
- update README
- update values.yaml

Signed-off-by: shubham <shubham.bajpai@mayadata.io>
2021-02-15 17:31:31 +05:30
Prateek Pandey
62e5b57d90
refact(charts): add pod security policy for zfslocalpv charts (#290)
Signed-off-by: prateekpandey14 <prateek.pandey@mayadata.io>
2021-02-15 15:03:40 +05:30
Shubham Bajpai
36e0f69fd0
chore(operator): update k8s sidecar images to gcr (#284)
Signed-off-by: shubham <shubham.bajpai@mayadata.io>
2021-02-05 12:18:41 +05:30
prateekpandey14
8335440d4c [stable/zfs-localpv]: update zfs-localpv charts to 1.3.0
Signed-off-by: prateekpandey14 <prateek.pandey@mayadata.io>
2021-01-14 20:50:17 +05:30
Shubham Bajpai
bd6df9b31d
feat(chart): add helm chart for zfs local pv (#247)
Signed-off-by: shubham <shubham.bajpai@mayadata.io>
2021-01-07 10:44:45 +05:30
Shubham Bajpai
e0fbce805b
chore(operator): bump k8s csi to latest stable container images (#271)
Signed-off-by: shubham <shubham.bajpai@mayadata.io>
2021-01-05 23:42:20 +05:30
Pawan
30a7f2317e fix(kust): removing quay as we are using multiarch docker images
Signed-off-by: Pawan <pawan@mayadata.io>
2020-12-04 11:41:18 +05:30
Pawan
e83e051f83 fix(kust): rename the kustomize.yaml file to kustomization.yaml
Signed-off-by: Pawan <pawan@mayadata.io>
2020-11-18 17:53:36 +05:30
Aman Gupta
919a058223
chore(yaml): changing the zfs-driver images to multi-arch docker hub images (#237)
Signed-off-by: Aman Gupta <aman.gupta@mayadata.io>
2020-11-14 12:44:38 +05:30
Pawan
64bc7cb1c9 feat(upgrade): support parallel/faster upgrades for node daemonset
For ZFSPV, all the node daemonset pods can go into the terminating state at
the same time, since the driver does not need any minimum availability of those pods.

Changing maxUnavailable to 100% so that K8s can upgrade all the daemonset
pods in parallel (see the sketch below).

Signed-off-by: Pawan <pawan@mayadata.io>
2020-11-03 12:54:58 +05:30
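The change above corresponds to the standard apps/v1 DaemonSet update strategy, shown here as a minimal fragment:

```yaml
# DaemonSet spec fragment: allow all node pods to be replaced at once.
updateStrategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: "100%"
```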
Pawan Prakash Sharma
f386bfc4ce
feat(kustomize): adding deployment via kustomize (#231)
Signed-off-by: Pawan <pawan@mayadata.io>
2020-10-31 10:06:22 +05:30
Pawan
26968b5394 feat(backup,restore): adding validation for backup and restore
Added schema validation for the backup and restore CRs. Also validating
the server address in the backup/restore controller.

The server address is validated as:

^([0-9]+.[0-9]+.[0-9]+.[0-9]+:[0-9]+)$

which is:

<any number>.<any number>.<any number>.<any number>:<any number>

Here we validate just the format of the address, not that the IP itself is
correct, which would be a little more complex. In any case, if the IP is not
correct, the zfs send will fail, so there is no need for more complex
validation of the IP and port. (A hypothetical CRD schema fragment is sketched below.)

Signed-off-by: Pawan <pawan@mayadata.io>
2020-09-30 11:32:32 +05:30
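A hypothetical sketch of how that pattern could appear in the CRD's openAPIV3Schema; the field name (`backupDest`) is an assumption, the pattern is the one quoted in the commit message:

```yaml
# Hypothetical CRD validation fragment; the field name is an assumption.
openAPIV3Schema:
  properties:
    spec:
      properties:
        backupDest:
          type: string
          pattern: '^([0-9]+.[0-9]+.[0-9]+.[0-9]+:[0-9]+)$'
```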
Pawan
c9ea713333 chore(yaml): removing centos yamls from the repo
Now we have a single operator YAML that works for all
OS distros; we no longer need OS-specific operator YAMLs.

Signed-off-by: Pawan <pawan@mayadata.io>
2020-09-16 21:09:10 +05:30
Pawan
5d05468694 chore(doc): adding docs for backup and restore
Signed-off-by: Pawan <pawan@mayadata.io>
2020-09-15 23:31:54 +05:30
Pawan Prakash Sharma
e40026c98a
feat(zfspv): adding backup and restore support (#162)
This commit adds Backup and Restore controllers, which watch for the
corresponding events. The velero plugin will create a Backup CR to request a
backup with the remote location information, and the controller will send the
data to that remote location.

In the same way, the velero plugin will create a Restore CR to restore the
volume from the remote location, and the restore controller will restore
the data.

Steps to use the velero plugin for ZFS-LocalPV:

1. Install velero.

2. Add the openebs plugin:

velero plugin add openebs/velero-plugin:latest

3. Create the VolumeSnapshotLocation:

For a full backup:

```yaml
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: openebs.io/zfspv-blockstore
  config:
    bucket: velero
    prefix: zfs
    namespace: openebs
    provider: aws
    region: minio
    s3ForcePathStyle: "true"
    s3Url: http://minio.velero.svc:9000
```

For an incremental backup:

```yaml
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: openebs.io/zfspv-blockstore
  config:
    bucket: velero
    prefix: zfs
    backup: incremental
    namespace: openebs
    provider: aws
    region: minio
    s3ForcePathStyle: "true"
    s3Url: http://minio.velero.svc:9000
```

4. Create a backup:

velero backup create my-backup --snapshot-volumes --include-namespaces=velero-ns --volume-snapshot-locations=aws-cloud-default --storage-location=default

5. Create a schedule:

velero create schedule newschedule  --schedule="*/1 * * * *" --snapshot-volumes --include-namespaces=velero-ns --volume-snapshot-locations=aws-local-default --storage-location=default

6. Restore from the backup:

velero restore create --from-backup my-backup --restore-volumes=true --namespace-mappings velero-ns:ns1



Signed-off-by: Pawan <pawan@mayadata.io>
2020-09-08 13:44:39 +05:30