Commit graph

72 commits

Author SHA1 Message Date
Stéphane Bidoul
c399c1b522
chore(yaml): add zfsnode-crd.yml to kustomize resources (#365)
Signed-off-by: Stéphane Bidoul <stephane.bidoul@acsone.eu>
2021-07-24 19:16:09 +05:30
Aadhav Vignesh
7036f70496
refactor: fix helm chart description (#361)
- fix badge url in deploy/helm/README.md

Signed-off-by: Aadhav Vignesh <aadhav.n1@gmail.com>
2021-07-19 13:07:39 +05:30
Shubham Bajpai
90cf61c1cb
[stable/zfs-localpv]: update charts to 1.9.0 (#359)
* [stable/zfs-localpv]: update charts to 1.9.0
* bump kind action to v1.2.0

Signed-off-by: shubham <shubham.bajpai@mayadata.io>
2021-07-16 12:30:17 +05:30
Hugo Renard
a88eb8bf9a
fix(chart): remove zfsNode.podLabels.name from values (#352)
Signed-off-by: Hugo Renard <hugo.renard@protonmail.com>
2021-06-29 15:36:01 +05:30
Suraj Deshmukh
273bf148d4
chore(yaml): fix bunch of typos (#348)
Signed-off-by: Suraj Deshmukh <surajd.service@gmail.com>
2021-06-24 18:39:26 +05:30
Shubham Bajpai
96608c066f
chore(helm): update config default values in README (#346)
Signed-off-by: shubham <shubham.bajpai@mayadata.io>
2021-06-15 19:59:31 +05:30
Shubham Bajpai
bac8b57848
[stable/zfs-localpv]: update charts to 1.8.0 (#344)
Signed-off-by: shubham <shubham.bajpai@mayadata.io>
2021-06-15 19:10:43 +05:30
Shovan Maity
ce6efdc84b
feat(charts): set default fstype to ext4 (#339)
Set default fstype to ext4 in csi-provisioner. This is helpful when
fsType is not mentioned in the storageclass.
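A sketch of how this typically looks in the controller deployment (illustrative excerpt only; the `--default-fstype` flag of the csi-provisioner sidecar is the assumed mechanism, and the image tag is a placeholder):

```yaml
# controller Deployment, csi-provisioner container (illustrative)
- name: csi-provisioner
  image: k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0
  args:
    - "--csi-address=$(ADDRESS)"
    - "--default-fstype=ext4"   # used when the StorageClass sets no fsType
```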

Signed-off-by: Shovan Maity <shovan.cse91@gmail.com>
2021-06-04 10:33:16 +05:30
Shubham Bajpai
3eb2c9e894
feat(scheduling): add zfs pool capacity tracking (#335)
Signed-off-by: shubham <shubham.bajpai@mayadata.io>
2021-05-31 18:59:59 +05:30
Shubham Bajpai
4fce22afb5
[stable/zfs-localpv]: update charts to 1.7.0 (#329)
Signed-off-by: shubham <shubham.bajpai@mayadata.io>
2021-05-15 10:49:12 +05:30
Pawan Prakash Sharma
1b30116e5f
feat(migration): adding support to migrate the PV to a new node (#304)
Use case: A node in the Kubernetes cluster is replaced with a new node. The
new node gets a different `kubernetes.io/hostname`. The storage devices
that were attached to the old node are re-attached to the new node.

Fix: Instead of using the default `kubernetes.io/hostname` as the node affinity
label, this commit changes to use `openebs.io/nodeid`. The ZFS LocalPV driver
will pick the value from the nodes and set the affinity.

With the default label, once the old node is removed from the cluster, the K8s
scheduler will keep trying to schedule the applications onto that old node only.

Users can now set the value of `openebs.io/nodeid` on the new node to the same
value that was present on the old node. This makes sure the pods/volumes are
scheduled to the new node.


Note: To migrate the PV to another node, move the disks to that node, remove
the old node from the cluster, and set the same label (same key, same value)
on the new node, which lets the k8s scheduler schedule the pods to that node.
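The label move described above can be sketched with kubectl (the node name and id value below are placeholders, not values from this commit):

```sh
# the old node carried openebs.io/nodeid=node-1-id; after re-attaching its
# disks to the new node and removing the old node from the cluster:
kubectl label node new-node openebs.io/nodeid=node-1-id
```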

Other updates: 
* adding faq doc
* renaming the config variable to nodename

Signed-off-by: Pawan <pawan@mayadata.io>
Co-authored-by: Akhil Mohan <akhilerm@gmail.com>

* Update docs/faq.md

Co-authored-by: Akhil Mohan <akhilerm@gmail.com>
2021-05-01 19:05:01 +05:30
Shubham Bajpai
fb71595d80
fix(helm): update the crds in helm charts (#314)
Signed-off-by: shubham <shubham.bajpai@mayadata.io>
2021-04-19 13:21:35 +05:30
shubham
16495fb1ea [stable/zfs-localpv]: update charts to 1.6.0
Signed-off-by: shubham <shubham.bajpai@mayadata.io>
2021-04-16 14:30:44 +05:30
Pawan
04f7635b6f feat(provision): try volume creation on all the nodes
Currently the controller picks one node and the node agent keeps trying to
create the volume on that node, even though there might not be enough space
available there to create the volume.

The controller can instead try all the nodes sequentially and fail the
request only if volume creation fails on every node that satisfies the
topology constraints.
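The retry-across-nodes behavior can be sketched as follows (a minimal illustration, not the driver's actual Go code; `candidate_nodes` and `create_on_node` are hypothetical names):

```python
def create_volume(candidate_nodes, create_on_node):
    """Try volume creation on each topology-satisfying node in turn.

    Returns the first node that accepted the volume; raises only if
    every candidate fails (e.g. none has enough free pool space).
    """
    errors = {}
    for node in candidate_nodes:
        try:
            create_on_node(node)
            return node
        except RuntimeError as err:  # e.g. not enough space on this node
            errors[node] = err
    raise RuntimeError(f"volume creation failed on all nodes: {errors}")
```

The key point is that a single node running out of space no longer wedges the request; the failure is only surfaced after the whole candidate list is exhausted.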

Signed-off-by: Pawan <pawan@mayadata.io>
2021-04-02 20:36:37 +05:30
Pawan Prakash Sharma
8cc56377bd
chore(yaml): fixing autogen yaml description (#301)
Signed-off-by: Pawan <pawan@mayadata.io>
2021-04-01 13:02:51 +05:30
Shubham Bajpai
533e17a9aa
chore(k8s): updated storage and apiextension version to v1 (#299)
Signed-off-by: shubham <shubham.bajpai@mayadata.io>
2021-03-31 15:09:48 +05:30
Shubham Bajpai
3162112327
[stable/zfs-localpv]: update charts to 1.5.0 (#296)
Signed-off-by: shubham <shubham.bajpai@mayadata.io>
2021-03-16 12:30:24 +05:30
Pawan Prakash Sharma
6ec49df225
fix(restore): adding support to restore in an encrypted pool (#292)
An encrypted pool does not allow the volume to be pre-created for the
restore. This changes the design to do the restore first and then create
the ZFSVolume object, which binds to the volume already created during
the restore.


Signed-off-by: Pawan <pawan@mayadata.io>
2021-03-01 23:56:42 +05:30
Shubham Bajpai
11a1034b0a
[stable/zfs-localpv]: update charts to 1.4.0 (#285)
- update chart version
- update README
- update values.yaml

Signed-off-by: shubham <shubham.bajpai@mayadata.io>
2021-02-15 17:31:31 +05:30
Prateek Pandey
62e5b57d90
refact(charts): add pod security policy for zfslocalpv charts (#290)
Signed-off-by: prateekpandey14 <prateek.pandey@mayadata.io>
2021-02-15 15:03:40 +05:30
Shubham Bajpai
36e0f69fd0
chore(operator): update k8s sidecar images to gcr (#284)
Signed-off-by: shubham <shubham.bajpai@mayadata.io>
2021-02-05 12:18:41 +05:30
prateekpandey14
8335440d4c [stable/zfs-localpv]: update zfs-localpv charts to 1.3.0
Signed-off-by: prateekpandey14 <prateek.pandey@mayadata.io>
2021-01-14 20:50:17 +05:30
Shubham Bajpai
bd6df9b31d
feat(chart): add helm chart for zfs local pv (#247)
Signed-off-by: shubham <shubham.bajpai@mayadata.io>
2021-01-07 10:44:45 +05:30
Shubham Bajpai
e0fbce805b
chore(operator): bump k8s csi to latest stable container images (#271)
Signed-off-by: shubham <shubham.bajpai@mayadata.io>
2021-01-05 23:42:20 +05:30
Pawan
30a7f2317e fix(kust): removing quay as we are using multiarch docker images
Signed-off-by: Pawan <pawan@mayadata.io>
2020-12-04 11:41:18 +05:30
Pawan
e83e051f83 fix(kust): rename the kustomize.yaml file to kustomization.yaml
Signed-off-by: Pawan <pawan@mayadata.io>
2020-11-18 17:53:36 +05:30
Aman Gupta
919a058223
chore(yaml): changing the zfs-driver images to multi-arch docker hub images (#237)
Signed-off-by: Aman Gupta <aman.gupta@mayadata.io>
2020-11-14 12:44:38 +05:30
Pawan
64bc7cb1c9 feat(upgrade): support parallel/faster upgrades for node daemonset
For ZFSPV, all the node daemonset pods can go into the terminating state at
the same time, since ZFSPV does not need any minimum availability of those pods.

Changing maxUnavailable to 100% so that K8s can upgrade all the daemonset
pods in parallel.
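A sketch of the corresponding daemonset update strategy (illustrative excerpt; the surrounding manifest layout is assumed):

```yaml
# zfs-node DaemonSet (excerpt)
updateStrategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 100%   # all node pods may be replaced in parallel
```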

Signed-off-by: Pawan <pawan@mayadata.io>
2020-11-03 12:54:58 +05:30
Pawan Prakash Sharma
f386bfc4ce
feat(kustomize): adding deployment via kustomize (#231)
Signed-off-by: Pawan <pawan@mayadata.io>
2020-10-31 10:06:22 +05:30
Pawan
26968b5394 feat(backup,restore): adding validation for backup and restore
Added a schema validation for backup and restore CR. Also validating
the server address in the backup/restore controller.

Validating the server address as :

^([0-9]+.[0-9]+.[0-9]+.[0-9]+:[0-9]+)$

which is :

<any number>.<any number>.<any number>.<any number>:<any number>

Here we are validating just the format of the address, not that the IP itself
is correct, which would be a little more complex. In any case, if the IP is not
correct the zfs send will fail, so there is no need for complex validation of
the IP and port.
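A minimal sketch of this format-only check. Note one editorial assumption: in the regex quoted above the dots are unescaped (so they would match any character); the variant below escapes them, which may differ from what the controller actually compiles:

```python
import re

# format-only check: <num>.<num>.<num>.<num>:<num>
# octet ranges are deliberately not validated, mirroring the commit's
# "validate the format, not the correctness" approach
SERVER_RE = re.compile(r"^(\d+\.\d+\.\d+\.\d+:\d+)$")

def valid_server_address(addr: str) -> bool:
    return SERVER_RE.match(addr) is not None
```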

Signed-off-by: Pawan <pawan@mayadata.io>
2020-09-30 11:32:32 +05:30
Pawan
c9ea713333 chore(yaml): removing centos yamls from the repo
Now we have a single operator yaml that works for all
OS distros, so we don't need OS-specific operator yamls.

Signed-off-by: Pawan <pawan@mayadata.io>
2020-09-16 21:09:10 +05:30
Pawan
5d05468694 chore(doc): adding docs for backup and restore
Signed-off-by: Pawan <pawan@mayadata.io>
2020-09-15 23:31:54 +05:30
Pawan Prakash Sharma
e40026c98a
feat(zfspv): adding backup and restore support (#162)
This commit adds support for Backup and Restore controllers, which watch for
these events. The velero plugin will create a Backup CR with the remote
location information, and the controller will send the data to that remote
location.

In the same way, the velero plugin will create a Restore CR to restore the
volume from the remote location, and the restore controller will restore
the data.

Steps to use velero plugin for ZFS-LocalPV are :

1. install velero

2. add openebs plugin

```sh
velero plugin add openebs/velero-plugin:latest
```

3. Create the volumesnapshot location :

for full backup :-

```yaml
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: openebs.io/zfspv-blockstore
  config:
    bucket: velero
    prefix: zfs
    namespace: openebs
    provider: aws
    region: minio
    s3ForcePathStyle: "true"
    s3Url: http://minio.velero.svc:9000
```

for incremental backup :-

```yaml
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: openebs.io/zfspv-blockstore
  config:
    bucket: velero
    prefix: zfs
    backup: incremental
    namespace: openebs
    provider: aws
    region: minio
    s3ForcePathStyle: "true"
    s3Url: http://minio.velero.svc:9000
```

4. Create backup

```sh
velero backup create my-backup --snapshot-volumes --include-namespaces=velero-ns --volume-snapshot-locations=aws-cloud-default --storage-location=default
```

5. Create schedule

```sh
velero create schedule newschedule --schedule="*/1 * * * *" --snapshot-volumes --include-namespaces=velero-ns --volume-snapshot-locations=aws-local-default --storage-location=default
```

6. Restore from backup

```sh
velero restore create --from-backup my-backup --restore-volumes=true --namespace-mappings velero-ns:ns1
```



Signed-off-by: Pawan <pawan@mayadata.io>
2020-09-08 13:44:39 +05:30
Pawan Prakash Sharma
a5e645b43d
feat(zfspv): mounting the root filesystem to remove the dependency on the Operating system (#204)
* feat(zfspv): mounting the root filesystem to remove the dependency on the OS

We were mounting the individual libraries needed to run the zfs
binary inside the ZFS-LocalPV daemonset. The problem is that each OS has a
different set of libraries, so we needed different operator yamls for
different OS versions.

Here we are mounting the root directory inside the ZFS-LocalPV daemonset pod;
the driver does a chroot to this path and runs the command. Since all the
libraries present on the host are then available inside the pod, we don't
need to mount each library individually, and it works for all operating
systems.

To be on the safe side, we mount the host's root directory read-only.
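The chroot idea can be sketched as simple command construction (an illustration only; the real driver is written in Go, and the `/host` mount point is an assumption):

```python
HOST_ROOT = "/host"  # assumed mount point of the host's root fs inside the pod

def host_command(argv):
    """Wrap a zfs invocation so it runs against the host's own binaries
    and libraries via chroot, instead of shipping OS-specific libraries
    into the container image."""
    return ["chroot", HOST_ROOT] + list(argv)
```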

Signed-off-by: Pawan <pawan@mayadata.io>

* adding comment for namespace

Signed-off-by: Pawan <pawan@mayadata.io>
2020-09-07 21:12:31 +05:30
Pawan
14f237db79 fix(yaml): removing volumeLifecycleModes from the operator yaml
This field was added in Kubernetes 1.16 and it informs Kubernetes about
the volume modes that are supported by the driver. The default is
"Persistent" if it is not used.

An operator yaml containing volumeLifecycleModes will not work on k8s 1.14 and
1.15. Since the driver supports those k8s versions, and the default is
"Persistent" anyway, there is no need to mention volumeLifecycleModes in the
operator.
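For reference, this is the kind of CSIDriver field being dropped (an illustrative sketch, not the repo's actual manifest; on k8s 1.16+ omitting the field is equivalent to setting it to "Persistent"):

```yaml
apiVersion: storage.k8s.io/v1beta1
kind: CSIDriver
metadata:
  name: zfs.csi.openebs.io
spec:
  attachRequired: false
  # removed: unknown to k8s 1.14/1.15, and "Persistent" is the default anyway
  # volumeLifecycleModes: ["Persistent"]
```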

Signed-off-by: Pawan <pawan@mayadata.io>
2020-07-22 23:07:44 +05:30
Pawan
21045a5b1f feat(bdd): adding snapshot and clone related test cases
added snapshot and clone related test cases. Also restructured
the BDD framework to loop through the supported fstypes and perform all
the test cases we have.

Signed-off-by: Pawan <pawan@mayadata.io>
2020-07-07 23:21:20 +05:30
vaniisgh
8bbf3d7d2f
feat(zfspv) Add golint check to travis (#175)
Signed-off-by: vaniisgh <vanisingh@live.co.uk>
2020-07-07 18:21:02 +05:30
Pawan
27065bf40a feat(shared): adding shared mount support ZFSPV volumes
Applications that want to share a volume can use the below storageclass
to make their volumes shareable by multiple pods

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv
parameters:
  shared: "yes"
  fstype: "zfs"
  poolname: "zfspv-pool"
provisioner: zfs.csi.openebs.io
```

Now a volume provisioned with this storageclass can be used by multiple pods.
The pods have to ensure data consistency themselves and use their own locking
mechanism. One thing to note: the pods will be scheduled to the node where the
volume is present, since they can only access it locally. This way we avoid
the NFS overhead and get optimal performance.

Also fixed the log formatting in the GRPC log.

Signed-off-by: Pawan <pawan@mayadata.io>
2020-07-02 00:40:15 +05:30
Pawan
daa73fa0b8 Revert "feat(yaml): updating v0.8.0 operator yaml to use 0.8.0 image tag"
This reverts commit 2c11af5362.

Signed-off-by: Pawan <pawan@mayadata.io>
2020-06-17 20:34:28 +05:30
Pawan
2c11af5362 feat(yaml): updating v0.8.0 operator yaml to use 0.8.0 image tag
Signed-off-by: Pawan <pawan@mayadata.io>
2020-06-15 15:05:22 +05:30
Pawan
b08a1e2a1f feat(usage): include pvc name in volume events
Signed-off-by: Pawan <pawan@mayadata.io>
2020-06-08 13:05:23 +05:30
Pawan
e558bb52cb feat(centos): adding operator yaml for centos7 and centos8
Signed-off-by: Pawan <pawan@mayadata.io>
2020-06-08 10:35:13 +05:30
Pawan
472fd603ac feat(beta): adding v1 CRD for ZFS-LocalPV
Moving the CRDs to stable v1 version.

Signed-off-by: Pawan <pawan@mayadata.io>
2020-06-04 16:02:32 +05:30
Pawan
d47ec3ba01 feat(print): removing unnecessary printer columns
Signed-off-by: Pawan <pawan@mayadata.io>
2020-05-21 19:47:38 +05:30
Pawan
57ef10cb71 fix(zfspv): changing image pull policy to IfNotPresent
Signed-off-by: Pawan <pawan@mayadata.io>
2020-05-21 09:17:45 +05:30
Pawan
25d1f1a413 feat(zfspv): pvc should be bound only if volume has been created.
The controller did not check whether the volume had actually been created and
returned success, which in turn bound the PVC to the PV.

The PVC should not be bound until the corresponding zfs volume has been
created. Now the controller checks that the ZFSVolume CR state is "Ready"
before returning success. The CSI sidecar retries the CreateVolume request
when it gets an error reply, and once the ZFS node agent creates the zfs
volume and sets the ZFSVolume CR state to "Ready", the controller returns
success for the CreateVolume request and the PVC is bound.
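The retry contract described above can be sketched as follows (hypothetical names; the real controller is Go, and the CSI sidecar is what drives the retries):

```python
class VolumeNotReady(Exception):
    """Returned as an error so the CSI sidecar retries CreateVolume."""

def create_volume_response(zfsvolume_state: str) -> str:
    # only report success (which lets the PVC bind) once the node agent
    # has actually created the zfs volume and marked the CR "Ready"
    if zfsvolume_state != "Ready":
        raise VolumeNotReady("volume not provisioned yet, retry")
    return "OK"
```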

Signed-off-by: Pawan <pawan@mayadata.io>
2020-05-21 08:49:57 +05:30
Pawan
2f19a6674b fix(image): updating the screenshot with new dashboard
Signed-off-by: Pawan <pawan@mayadata.io>
2020-05-15 21:42:27 +05:30
Pawan Prakash Sharma
1045f1daa1
feat(grafana): adding basic grafana dashboard (#110)
adding grafana dashboard for ZFS Local PV that shows the following metrics:

- Volume Capacity (used space percentage)
- ARC Size, Hits, Misses
- L2ARC Size, Hits, Misses
- ZPOOL Read/Write IOs
- ZPOOL Read/Write time

This dashboard was inspired by https://grafana.com/grafana/dashboards/7845

Signed-off-by: Pawan <pawan@mayadata.io>
2020-05-15 14:39:16 +05:30
Pawan Prakash Sharma
dd059a2f43
feat(block): adding block volume support for ZFSPV (#102)
This commit adds support for creating a raw block volume by requesting volumeMode: Block in the PVC:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: block-claim
spec:
  volumeMode: Block
  storageClassName: zfspv-block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```

The driver will create a zvol for this volume and bind mount the block device at the given path.

Signed-off-by: Pawan <pawan@mayadata.io>
2020-05-05 12:28:46 +05:30
Pawan Prakash Sharma
de9b302083
feat(topology): adding support for custom topology keys (#94)
This commit adds support for users to specify custom labels on the kubernetes nodes and use them in the allowedTopologies section of the StorageClass.

Few notes:
- This PR depends on the CSI driver's capability to support custom topology keys. 
- Labels should be added to the nodes first and the driver deployed afterwards, to make
it aware of all the labels a node has. If labels are added after the ZFS-LocalPV driver
has been deployed, a restart of all the node CSI driver agents is required so that the
driver can pick up the labels and add them as supported topology keys.
- If the storageclass uses Immediate binding mode and no topology key is mentioned,
then all the nodes should be labeled using the same key, that is:
  - the same key should be present on all nodes; nodes can have different values for it.
  - If some nodes are labeled with different keys, ZFSPV's default scheduler cannot effectively do volume-count-based scheduling. In that case the CSI provisioner will pick keys from a random node and prepare the preferred topology list using the nodes which have those keys defined, and the ZFSPV scheduler will schedule the PV among those nodes only.
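A sketch of a StorageClass using a custom node label as a topology key (the label key and values below are placeholders, not names from this commit):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv-custom
parameters:
  poolname: "zfspv-pool"
provisioner: zfs.csi.openebs.io
allowedTopologies:
- matchLabelExpressions:
  - key: openebs.io/rack        # custom label applied to nodes beforehand
    values: ["rack1", "rack2"]
```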

Signed-off-by: Pawan <pawan@mayadata.io>
2020-04-30 14:13:29 +05:30