Commit graph

20 commits

Author SHA1 Message Date
Rahul Grover
a8376796b7
refactor(pkg): Remove unused imports, variables, and functions. (#321)
Signed-off-by: Rahul Grover <rahulgrover99@gmail.com>
2021-05-04 19:57:41 +05:30
Pawan
0e6a02ea74 fix(topo): support old topology key for backward compatibility
Signed-off-by: Pawan <pawan@mayadata.io>
2021-05-04 13:27:51 +05:30
Pawan Prakash Sharma
1b30116e5f
feat(migration): adding support to migrate the PV to a new node (#304)
Use case: A node in the Kubernetes cluster is replaced with a new node. The
new node gets a different `kubernetes.io/hostname`. The storage devices
that were attached to the old node are re-attached to the new node.

Fix: Instead of using the default `kubernetes.io/hostname` as the node affinity
label, this commit switches to `openebs.io/nodeid`. The ZFS-LocalPV driver
will pick the value from the node and set the affinity.

Once the old node is removed from the cluster, the K8s scheduler will still
try to schedule applications to the old node only, since the PV node affinity points there.

The user can now set `openebs.io/nodeid` on the new node to the same
value that was on the old node. This makes sure the pods/volumes are
scheduled to the new node.


Note: To migrate the PV to another node, we have to move the disks to that node,
remove the old node from the cluster, and set the same label on the new node using
the same key, which lets the k8s scheduler schedule the pods to that node.
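For illustration, a minimal sketch of the relabeling step (node names and the label value are hypothetical):

```yaml
# fragment of the replacement node's manifest: carry over the nodeid value
# that the old node had, so the PV node affinity matches again
apiVersion: v1
kind: Node
metadata:
  name: node-new                  # assumption: the replacement node
  labels:
    openebs.io/nodeid: node-old   # same value the old node carried
```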

Other updates: 
* adding faq doc
* renaming the config variable to nodename

Signed-off-by: Pawan <pawan@mayadata.io>
Co-authored-by: Akhil Mohan <akhilerm@gmail.com>

* Update docs/faq.md

Co-authored-by: Akhil Mohan <akhilerm@gmail.com>
2021-05-01 19:05:01 +05:30
Pawan Prakash Sharma
68d79d0e0b
feat(zfspv): remove finalizer that is owned by ZFS-LocalPV (#303)
We used to set the Finalizers to nil while handling the delete event; instead,
we should try to destroy the volume only when no user finalizers are
set. Users might have added their own finalizers, and we should not try to destroy
the volume until those user finalizers are removed.
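As a sketch, a ZFSVolume in this state would look like the following (the resource name and the user finalizer are hypothetical):

```yaml
apiVersion: zfs.openebs.io/v1
kind: ZFSVolume
metadata:
  name: pvc-example
  namespace: openebs
  finalizers:
  - zfs.openebs.io/finalizer   # owned by ZFS-LocalPV
  - example.com/protect        # user finalizer; the volume is destroyed
                               # only after this is removed
```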

Signed-off-by: Pawan <pawan@mayadata.io>
2021-04-06 20:03:00 +05:30
Pawan
04f7635b6f feat(provision): try volume creation on all the nodes
Currently the controller picks one node and the node agent keeps trying to
create the volume on that node, even though there might not be enough space
available on that node to create the volume.

The controller can instead try all the nodes sequentially and fail
the request only if volume creation fails on all the nodes that satisfy the
topology constraints.
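As a sketch, with topology constraints like the following (node names are illustrative), the controller now tries node-1, then node-2, and so on, failing the request only if creation fails everywhere:

```yaml
allowedTopologies:
- matchLabelExpressions:
  - key: kubernetes.io/hostname
    values:
    - node-1   # tried first
    - node-2   # tried next if node-1 does not have enough space
    - node-3
```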

Signed-off-by: Pawan <pawan@mayadata.io>
2021-04-02 20:36:37 +05:30
Pawan Prakash Sharma
fb6f1006da
feat(clone): add support for creating a Clone using a volume as the datasource (#234)
This PR adds the capability to create a clone from a PVC directly:

```
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-clone
spec:
  storageClassName: openebs-snap
  dataSource:
    name: pvc-snap
    kind: PersistentVolumeClaim
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
```
The ZFS-LocalPV driver will create one internal snapshot with the same name
as the new volume and will create a clone out of it. While destroying the
volume, the driver will also take care of deleting the snapshot that was
created for the clone.

Signed-off-by: Pawan <pawan@mayadata.io>
2020-11-11 18:58:25 +05:30
Pawan Prakash Sharma
e40026c98a
feat(zfspv): adding backup and restore support (#162)
This commit adds support for Backup and Restore controllers, which will watch for
the corresponding events. The velero plugin will create a Backup CR
with the remote location information, and the controller will send the data
to that remote location.

In the same way, the velero plugin will create a Restore CR to restore the
volume from the remote location, and the restore controller will restore
the data.

Steps to use the velero plugin for ZFS-LocalPV:

1. Install velero.

2. Add the openebs plugin:

```
velero plugin add openebs/velero-plugin:latest
```

3. Create the volumesnapshot location.

For full backup:

```yaml
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: openebs.io/zfspv-blockstore
  config:
    bucket: velero
    prefix: zfs
    namespace: openebs
    provider: aws
    region: minio
    s3ForcePathStyle: "true"
    s3Url: http://minio.velero.svc:9000
```

For incremental backup:

```yaml
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: openebs.io/zfspv-blockstore
  config:
    bucket: velero
    prefix: zfs
    backup: incremental
    namespace: openebs
    provider: aws
    region: minio
    s3ForcePathStyle: "true"
    s3Url: http://minio.velero.svc:9000
```

4. Create a backup:

```
velero backup create my-backup --snapshot-volumes --include-namespaces=velero-ns --volume-snapshot-locations=aws-cloud-default --storage-location=default
```

5. Create a schedule:

```
velero create schedule newschedule --schedule="*/1 * * * *" --snapshot-volumes --include-namespaces=velero-ns --volume-snapshot-locations=aws-local-default --storage-location=default
```

6. Restore from the backup:

```
velero restore create --from-backup my-backup --restore-volumes=true --namespace-mappings velero-ns:ns1
```



Signed-off-by: Pawan <pawan@mayadata.io>
2020-09-08 13:44:39 +05:30
vaniisgh
d0d1664d43
feat(zfspv): move to klog (#166)
Signed-off-by: vaniisgh <vanisingh@live.co.uk>
2020-06-29 12:18:33 +05:30
wiwen
f5ae3ff476
chore(go-lint): fix golint warning (#133)
Fixes several golint warnings reported by Go Report.

Signed-off-by: wiwen <shenggxhz@gmail.com>
2020-06-09 14:47:23 +05:30
Pawan
472fd603ac feat(beta): adding v1 CRD for ZFS-LocalPV
Moving the CRDs to the stable v1 version.
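A minimal sketch of the promoted ZFSVolume CRD (schema abridged; the exact fields shown here are assumptions, not the committed definition):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: zfsvolumes.zfs.openebs.io
spec:
  group: zfs.openebs.io
  scope: Namespaced
  names:
    plural: zfsvolumes
    singular: zfsvolume
    kind: ZFSVolume
    shortNames:
    - zfsvol
    - zv
  versions:
  - name: v1           # the new stable version
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
```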

Signed-off-by: Pawan <pawan@mayadata.io>
2020-06-04 16:02:32 +05:30
Pawan
25d1f1a413 feat(zfspv): pvc should be bound only if volume has been created.
The controller does not check whether the volume has been created
and returns success, which in turn binds the PVC to the PV.

The PVC should not be bound until the corresponding ZFS volume has been created.
Now the controller will check that the ZFSVolume CR state is "Ready" before returning
success. The CSI provisioner will retry the CreateVolume request when it gets
an error reply, and when the ZFS node agent creates the ZFS volume and sets the
ZFSVolume CR state to "Ready", the controller will return success for the
CreateVolume request and then the PVC will be bound.
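For illustration, an abridged ZFSVolume CR in the state the controller now waits for (names, values, and the exact field placement are assumptions):

```yaml
apiVersion: zfs.openebs.io/v1alpha1
kind: ZFSVolume
metadata:
  name: pvc-example
  namespace: openebs
spec:
  capacity: "4294967296"
  ownerNodeID: node-1
  poolName: zfspv-pool
  volumeType: DATASET
status:
  state: Ready   # set by the node agent; CreateVolume succeeds only now
```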

Signed-off-by: Pawan <pawan@mayadata.io>
2020-05-21 08:49:57 +05:30
Pawan
49dc99726b fix(topokey): changing topology key to unique name
There are setups where the nodename is different from the hostname.
The driver uses the nodename and tries to set the "kubernetes.io/hostname"
node label to the nodename, which will fail if the nodename is not the same as
the hostname. This change switches the key to a unique name so that the driver can set
that key as a node label without modifying/touching the existing node labels.

From now on, the driver will use the "openebs.io/nodename" key to set the PV node affinity.
Old volumes will have "kubernetes.io/hostname" affinity, and they will keep working:
after the PR https://github.com/openebs/zfs-localpv/pull/94, the driver supports all node
labels as topology keys, and all the nodes have the "kubernetes.io/hostname" label set, so
old volumes will work without any issue. For the same reason, old storage classes
which use "kubernetes.io/hostname" as the topology key will work, as that key is still supported.
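A sketch of the resulting PV node affinity under the new key (node name illustrative):

```yaml
nodeAffinity:
  required:
    nodeSelectorTerms:
    - matchExpressions:
      - key: openebs.io/nodename   # unique key set by the driver itself
        operator: In
        values:
        - node-1
```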

This fixes the issue where the driver was trying to create the PV on the master node:
since the master node has the "kubernetes.io/hostname" label, it was also becoming a valid
candidate for provisioning the PV. After changing the key to a unique name, since the driver
does not run on the master node, it will not set the "openebs.io/nodename" label on that node,
so the master node will never become a valid candidate for provisioning the volume.

Signed-off-by: Pawan <pawan@mayadata.io>
2020-04-30 14:48:51 +05:30
Pawan Prakash Sharma
fbd4812642
feat(zfspv): adding poolname info to the PV volumeattributes (#80)
Now the PV will have the poolname/parent-dataset info in its volume attributes, to help identify the zpool on which the PV has been created.

```
$ kubectl describe pv pvc-22d55c56-0c52-4fd5-894c-1f54c4dac5b7
Name:              pvc-22d55c56-0c52-4fd5-894c-1f54c4dac5b7
Labels:            <none>
Annotations:       pv.kubernetes.io/provisioned-by: zfs.csi.openebs.io
Finalizers:        [kubernetes.io/pv-protection]
StorageClass:      openebs-zfspv
Status:            Bound
Claim:             default/pvcname208
Reclaim Policy:    Delete
Access Modes:      RWO
VolumeMode:        Filesystem
Capacity:          4Gi
Node Affinity:
  Required Terms:
    Term 0:        kubernetes.io/hostname in [pawan-2]
Message:
Source:
    Type:              CSI (a Container Storage Interface (CSI) volume source)
    Driver:            zfs.csi.openebs.io
    VolumeHandle:      pvc-22d55c56-0c52-4fd5-894c-1f54c4dac5b7
    ReadOnly:          false
    VolumeAttributes:      openebs.io/poolname=zfspv-pool
                           storage.kubernetes.io/csiProvisionerIdentity=1586765686638-8081-zfs.csi.openebs.io
Events:                <none>
```
Signed-off-by: Pawan <pawan@mayadata.io>
2020-04-14 08:46:35 +05:30
Pawan Prakash Sharma
c4c2278d2f
refactor(crd): move CR from openebs.io to zfs.openebs.io (#70)
Changed the group name from openebs.io to zfs.openebs.io.

Now the ZFSVolume CRD will look like this:
```
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: zfsvolumes.zfs.openebs.io
spec:
  group: zfs.openebs.io
  version: v1alpha1
  scope: Namespaced
  names:
    plural: zfsvolumes
    singular: zfsvolume
    kind: ZFSVolume
    shortNames:
    - zfsvol
    - zv
```

The Snapshot CRD will look like this:
```
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: zfssnapshots.zfs.openebs.io
spec:
  group: zfs.openebs.io
  version: v1alpha1
  scope: Namespaced
  names:
    plural: zfssnapshots
    singular: zfssnapshot
    kind: ZFSSnapshot
    shortNames:
    - zfssnapshot
    - zfssnap

```


Signed-off-by: Pawan <pawan@mayadata.io>
2020-03-30 22:12:34 +05:30
Pawan
86e623a369 feat(resize): adding Online volume expansion support for ZFSPV
We can resize the volume by updating the PVC yaml to
the desired size and applying it. The ZFS driver will take care
of updating the quota in the case of a dataset. If we are using a
zvol and have mounted it as an ext4 or xfs filesystem, the driver will take
care of expanding the volume via the resize2fs/xfs_growfs binaries.

For resize, the storageclass that provisions the PVC must support
resize: it should have allowVolumeExpansion set to true.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv
allowVolumeExpansion: true
parameters:
  poolname: "zfspv-pool"
provisioner: zfs.csi.openebs.io

```
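For example, a sketch of the resize request itself, assuming an existing 4Gi PVC named csi-zfspv bound to the above storageclass:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: csi-zfspv
spec:
  storageClassName: openebs-zfspv
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi   # bumped from 4Gi; re-applying triggers the expansion
```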

Signed-off-by: Pawan <pawan@mayadata.io>
2020-03-04 18:30:28 +05:30
Pawan
d608dbacd8 feat(analytics): adding google analytics for ZFSPV
Whenever a volume is provisioned or de-provisioned, we will send a google event with mainly the following details:
1. pvName (will be shown as the app title in google analytics)
2. size of the volume
3. event type: volume-provision, volume-deprovision
4. storage type: zfs-localpv
5. replicacount as 1
6. ClientId as the default namespace uuid

Apart from this, we send an event once every 24 hours, which will have some info like the number of nodes, node type, kubernetes version, etc.

This metric is controlled by the OPENEBS_IO_ENABLE_ANALYTICS env. We can set it to false if we don't want to send the metrics.
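A sketch of opting out, assuming the env is set on the driver's deployment:

```yaml
env:
  - name: OPENEBS_IO_ENABLE_ANALYTICS
    value: "false"   # stops the provision/deprovision events
```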

Signed-off-by: Pawan <pawan@mayadata.io>
2020-03-02 23:00:22 +05:30
Pawan Prakash Sharma
287606b78a
feat(zfspv): adding snapshot and clone support for ZFSPV (#39)
This commit supports snapshot and clone operations via the CSI driver. Users can create snapshots and clones using the following steps.

Note:
- Snapshots are created via reconciliation of a CR
- The cloned volume will be on the same zpool where the snapshot was taken
- The cloned volume will have the same properties as the source volume.

-----------------------------------
Create a Snapshotclass
```
kind: VolumeSnapshotClass
apiVersion: snapshot.storage.k8s.io/v1beta1
metadata:
  name: zfspv-snapclass
  annotations:
    snapshot.storage.kubernetes.io/is-default-class: "true"
driver: zfs.csi.openebs.io
deletionPolicy: Delete
```
Once the snapshotclass is created, we can use it to create a Snapshot:
```
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: zfspv-snap
spec:
  volumeSnapshotClassName: zfspv-snapclass
  source:
    persistentVolumeClaimName: csi-zfspv
```
```
$ kubectl get volumesnapshot
NAME          AGE
zfspv-snap    7m52s
```
```
$ kubectl get volumesnapshot -o yaml
apiVersion: v1
items:
- apiVersion: snapshot.storage.k8s.io/v1beta1
  kind: VolumeSnapshot
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"snapshot.storage.k8s.io/v1beta1","kind":"VolumeSnapshot","metadata":{"annotations":{},"name":"zfspv-snap","namespace":"default"},"spec":{"source":{"persistentVolumeClaimName":"csi-zfspv"},"volumeSnapshotClassName":"zfspv-snapclass"}}
    creationTimestamp: "2020-01-30T10:31:24Z"
    finalizers:
    - snapshot.storage.kubernetes.io/volumesnapshot-as-source-protection
    - snapshot.storage.kubernetes.io/volumesnapshot-bound-protection
    generation: 1
    name: zfspv-snap
    namespace: default
    resourceVersion: "30040"
    selfLink: /apis/snapshot.storage.k8s.io/v1beta1/namespaces/default/volumesnapshots/zfspv-snap
    uid: 1a5cf166-c599-4f58-9f3c-f1148be47fca
  spec:
    source:
      persistentVolumeClaimName: csi-zfspv
    volumeSnapshotClassName: zfspv-snapclass
  status:
    boundVolumeSnapshotContentName: snapcontent-1a5cf166-c599-4f58-9f3c-f1148be47fca
    creationTime: "2020-01-30T10:31:24Z"
    readyToUse: true
    restoreSize: "0"
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
```


The OpenEBS resource for the created snapshot:
```
$ kubectl get snap -n openebs -o yaml
apiVersion: v1
items:
- apiVersion: openebs.io/v1alpha1
  kind: ZFSSnapshot
  metadata:
    creationTimestamp: "2020-01-30T10:31:24Z"
    finalizers:
    - zfs.openebs.io/finalizer
    generation: 2
    labels:
      kubernetes.io/nodename: pawan-2
      openebs.io/persistent-volume: pvc-18cab7c3-ec5e-4264-8507-e6f7df4c789a
    name: snapshot-1a5cf166-c599-4f58-9f3c-f1148be47fca
    namespace: openebs
    resourceVersion: "30035"
    selfLink: /apis/openebs.io/v1alpha1/namespaces/openebs/zfssnapshots/snapshot-1a5cf166-c599-4f58-9f3c-f1148be47fca
    uid: e29d571c-42b5-4fb7-9110-e1cfc9b96641
  spec:
    capacity: "4294967296"
    fsType: zfs
    ownerNodeID: pawan-2
    poolName: zfspv-pool
    status: Ready
    volumeType: DATASET
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
```

Create a clone volume

We can provide the snapshot name as a dataSource to create a clone volume:
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: zfspv-clone
spec:
  storageClassName: openebs-zfspv
  dataSource:
    name: zfspv-snap
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
```
It will create a ZFS clone volume from the mentioned snapshot and create the PV on the same node where the original volume is.

As resize is not supported yet, the clone PVC size should match the size of the snapshot.
Also, the properties from the storageclass will not be considered for the clone case; the driver will take the properties from the snapshot and create the clone volume. One thing to note here is that the storageclass in the clone PVC should have the same poolname as that of the original volume, as cloning across pools is not supported.


Signed-off-by: Pawan <pawan@mayadata.io>
2020-02-13 13:31:17 +05:30
Pawan Prakash Sharma
a10dedbd5e feat(ZFSPV): volume count based scheduler for ZFSPV (#8)
This is an initial scheduler implementation for ZFS Local PV. 

* adding scheduler as a configurable option
* adding volumeWeightedScheduler as scheduling logic

The volumeWeightedScheduler will go through all the nodes as per the
topology information and pick the node which has the fewest
volumes provisioned in the given pool.

Let's say there are 2 nodes, node1 and node2, with the below pool configuration:
```
node1
|
|-----> pool1
|         |
|         |------> pvc1
|         |------> pvc2
|-----> pool2
          |------> pvc3

node2
|
|-----> pool1
|         |
|         |------> pvc4
|-----> pool2
          |------> pvc5
          |------> pvc6
```
So if the application is using pool1 as shown in the below storage class, then the ZFS driver will schedule it on node2, as it has one volume in pool1 compared to node1, which has 2.
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: openebs-zfspv
provisioner: zfs.csi.openebs.io
parameters:
  blocksize: "4k"
  compression: "on"
  dedup: "on"
  thinprovision: "yes"
  poolname: "pool1"
```

So if the application is using pool2 as shown in the below storage class, then the ZFS driver will schedule it on node1, as it has only one volume in pool2 compared to node2, which has 2.
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: openebs-zfspv
provisioner: zfs.csi.openebs.io
parameters:
  blocksize: "4k"
  compression: "on"
  dedup: "on"
  thinprovision: "yes"
  poolname: "pool2"
```
If all the nodes have the same number of volumes for the given pool, it can pick any node and schedule the PV on it.

Signed-off-by: Pawan <pawan@mayadata.io>
2019-11-06 21:20:49 +05:30
Pawan Prakash Sharma
d0e97cddb2 adding topology support for zfspv (#7)
This PR adds support to allow the CSI driver to pick a node matching the topology specified in the storage class. The admin can specify allowedTopologies in the StorageClass to list the nodes where the zfs pools are set up:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv
allowVolumeExpansion: true
parameters:
  blocksize: "4k"
  compression: "on"
  dedup: "on"
  thinprovision: "yes"
  poolname: "zfspv-pool"
provisioner: zfs-localpv
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
- matchLabelExpressions:
  - key: kubernetes.io/hostname
    values:
      - gke-zfspv-pawan-default-pool-c8929518-cgd4
      - gke-zfspv-pawan-default-pool-c8929518-dxzc
```

Note: This PR picks the first node from the list of available nodes.

Signed-off-by: Pawan <pawan@mayadata.io>
2019-11-01 06:46:04 +05:30
Pawan
9f5cf445df feat(zfs-localpv): initial commit
Provisioning and deprovisioning of
the volumes on the node where a zfs pool
has already been set up. The pool name and the volume
parameters have to be given in the storage class,
which will be used to provision the volume.

Signed-off-by: Pawan <pawan@mayadata.io>
2019-09-18 08:44:08 +05:30