Commit graph

51 commits

Author SHA1 Message Date
Abhishek Agarwal
d7115eefe9
chore(analytics): Send install & ping events on zfs-localpv start (#386)
Signed-off-by: Abhishek Agarwal <abhishek.agarwal@mayadata.io>
2021-09-16 16:19:42 +05:30
Pawan Prakash Sharma
9b907fa884
fix(crash): reuse the err variable (#353)
The error variable was being redeclared inside the loop (shadowed with `:=`),
so the assignment is not visible outside the loop; the provisioner crashes
when returning on error because it dereferences err.Error() while the outer
err is still nil.

Signed-off-by: Pawan <pawan@mayadata.io>
2021-06-25 11:44:24 +05:30
Abhinandan Purkait
83f24628ab
fix(cas): add cas-type key for zfs under volume attributes (#338)
Signed-off-by: Abhinandan-Purkait <abhinandan.purkait@mayadata.io>
2021-06-04 07:32:24 +05:30
Shubham Bajpai
3eb2c9e894
feat(scheduling): add zfs pool capacity tracking (#335)
Signed-off-by: shubham <shubham.bajpai@mayadata.io>
2021-05-31 18:59:59 +05:30
Atibhi Agrawal
0f677b6afd
refact(err): fix unhandled errors (#327)
Signed-off-by: aSquare14 <atibhi.a@gmail.com>
2021-05-06 19:28:32 +05:30
Sonia Singla
73d9580817
refact(deadcode): Fix deadcode and shebangs error (#324)
Signed-off-by: Sonia Singla <soniasingla.1812@gmail.com>
2021-05-06 19:14:23 +05:30
Atibhi Agrawal
137572552e
refact(error): handle errors (#326)
Signed-off-by: aSquare14 <atibhi.a@gmail.com>
2021-05-06 17:07:43 +05:30
Rahul Grover
a8376796b7
refact(pkg): Removes unused imports, variables and functions. (#321)
Signed-off-by: Rahul Grover <rahulgrover99@gmail.com>
2021-05-04 19:57:41 +05:30
Pawan
0e6a02ea74 fix(topo): support old topology key for backward compatibility
Signed-off-by: Pawan <pawan@mayadata.io>
2021-05-04 13:27:51 +05:30
Pawan Prakash Sharma
1b30116e5f
feat(migration): adding support to migrate the PV to a new node (#304)
Use case: A node in the Kubernetes cluster is replaced with a new node. The
new node gets a different `kubernetes.io/hostname`, and the storage devices
that were attached to the old node are re-attached to the new node.

Fix: Instead of using the default `kubernetes.io/hostname` as the node affinity
label, this commit changes to `openebs.io/nodeid`. The ZFS-LocalPV driver
picks the value from the nodes and sets the affinity accordingly.

Previously, once the old node was removed from the cluster, the K8s scheduler
would keep trying to schedule the application pods onto the old node only.

Users can now set the value of `openebs.io/nodeid` on the new node to the same
value that was present on the old node. This makes sure the pods/volumes are
scheduled to the new node.


Note: To migrate the PV to another node, we have to move the disks to that
node, remove the old node from the cluster, and set the same label (same key
and value) on the new node, which lets the k8s scheduler schedule the pods to
that node.
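A minimal sketch of that relabeling (the node name and label value are hypothetical placeholders):

```yaml
apiVersion: v1
kind: Node
metadata:
  name: worker-new                 # hypothetical replacement node
  labels:
    openebs.io/nodeid: node-1      # same value the old node carried
```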

Other updates: 
* adding faq doc
* renaming the config variable to nodename

Signed-off-by: Pawan <pawan@mayadata.io>
Co-authored-by: Akhil Mohan <akhilerm@gmail.com>

* Update docs/faq.md

Co-authored-by: Akhil Mohan <akhilerm@gmail.com>
2021-05-01 19:05:01 +05:30
Pawan
04f7635b6f feat(provision): try volume creation on all the nodes
Currently the controller picks one node and the node agent keeps trying to
create the volume on that node, even if there is not enough space available
there to create the volume.

The controller can instead try all the nodes sequentially and fail the
request only if volume creation fails on every node that satisfies the
topology constraints.

Signed-off-by: Pawan <pawan@mayadata.io>
2021-04-02 20:36:37 +05:30
Prateek Pandey
b1aa6ab51a
refact(deps): bump k8s and client-go deps to version v0.20.2 (#294)
Signed-off-by: prateekpandey14 <prateek.pandey@mayadata.io>
2021-03-31 16:43:42 +05:30
Pawan
88ad25ec9c feat(resize): adding resize support for raw block volumes
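A hedged usage sketch, reusing the `block-claim` PVC from the raw block volume commit further down this log (the new size is illustrative, and the storageclass must set allowVolumeExpansion: true):

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: block-claim
spec:
  volumeMode: Block
  storageClassName: zfspv-block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi   # raised from the original 5Gi to request expansion
```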
Signed-off-by: Pawan <pawan@mayadata.io>
2021-02-02 12:44:02 +05:30
Pawan
90ecfe9c73 feat(schd): adding capacity weighted scheduler
The ZFS driver will use the capacity scheduler to pick the node which has
the least capacity occupied by volumes. This is now the default scheduler,
as it is better than volume-count-based scheduling. We can use the below
storageclass to specify the scheduler:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv
allowVolumeExpansion: true
parameters:
  scheduler: "CapacityWeighted"
  poolname: "zfspv-pool"
provisioner: zfs.csi.openebs.io
```

Please note that there is a change in behavior after the upgrade: if the
`scheduler` parameter is not set in the storage class, the ZFS driver will
pick the node based on volume capacity weight instead of the volume count.
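A hedged sketch for keeping the old behavior, assuming the count-based scheduler is selected with the value `VolumeWeighted` (the scheduler introduced further down this log):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv
allowVolumeExpansion: true
parameters:
  scheduler: "VolumeWeighted"   # assumed value for the count-based scheduler
  poolname: "zfspv-pool"
provisioner: zfs.csi.openebs.io
```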

Signed-off-by: Pawan <pawan@mayadata.io>
2021-01-07 10:38:44 +05:30
praveengt
0e3098920c
fix(build): Cross build environment bug fixes (#264)
- Adding type casting to make compilation work under the macOS build environment

- Using the go env variable instead of uname for determining the platform

Signed-off-by: praveengt <praveen.gt@flipkart.com>
2020-12-21 11:52:36 +05:30
Shubham Bajpai
2906d39d94
refact(csi): use common lib-csi imports (#263)
Signed-off-by: shubham <shubham.bajpai@mayadata.io>
2020-12-18 21:12:52 +05:30
Pawan
0409fca095 fix(sanity): fixing flaky sanity test case
Also moving to the bionic docker image for the GitHub action.

Signed-off-by: Pawan <pawan@mayadata.io>
2020-12-10 20:06:45 +05:30
Pawan Prakash Sharma
a73a59fd49
feat(sanity): adding CSI Sanity test (#232)
* adding CSI Sanity test for ZFS-LocalPV
* make lowercase at all the places

Signed-off-by: Pawan <pawan@mayadata.io>
2020-12-10 11:53:16 +05:30
Pawan
e1e8aa5839 chore(refactor): refactor scheduler for ZFS-LocalPV
Signed-off-by: Pawan <pawan@mayadata.io>
2020-12-07 16:04:12 +05:30
Pawan
d537bd3655 chore(refactor): move xfs and mount code out of zfs package
Signed-off-by: Pawan <pawan@mayadata.io>
2020-12-02 12:20:59 +05:30
Pawan Prakash Sharma
fb6f1006da
feat(clone): add support for creating the Clone from volume as datasource (#234)
This PR adds the capability to create a clone from a PVC directly:

```
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-clone
spec:
  storageClassName: openebs-snap
  dataSource:
    name: pvc-snap
    kind: PersistentVolumeClaim
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
```
The ZFS-LocalPV driver will create an internal snapshot with the same name
as the new volume and will create a clone out of it. Also, while destroying
the volume, the driver will take care of deleting the snapshot created for
the clone.

Signed-off-by: Pawan <pawan@mayadata.io>
2020-11-11 18:58:25 +05:30
Gagandeep Singh
3da4f7308e
chore(refactor): Remove MountInfo struct from api (#225)
Signed-off-by: Gagandeep Singh <codegagan@gmail.com>
2020-10-12 10:59:23 +05:30
Pawan Prakash Sharma
e40026c98a
feat(zfspv): adding backup and restore support (#162)
This commit adds support for the Backup and Restore controllers, which watch
for the relevant events. The velero plugin will create a Backup CR carrying
the remote location information, and the backup controller will send the
data to that remote location.

In the same way, the velero plugin will create a Restore CR to restore the
volume from the remote location, and the restore controller will restore
the data.

Steps to use the velero plugin for ZFS-LocalPV:

1. Install velero.

2. Add the openebs plugin:

velero plugin add openebs/velero-plugin:latest

3. Create the volumesnapshot location.

For a full backup:

```yaml
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: openebs.io/zfspv-blockstore
  config:
    bucket: velero
    prefix: zfs
    namespace: openebs
    provider: aws
    region: minio
    s3ForcePathStyle: "true"
    s3Url: http://minio.velero.svc:9000
```

For an incremental backup:

```yaml
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: openebs.io/zfspv-blockstore
  config:
    bucket: velero
    prefix: zfs
    backup: incremental
    namespace: openebs
    provider: aws
    region: minio
    s3ForcePathStyle: "true"
    s3Url: http://minio.velero.svc:9000
```

4. Create a backup:

velero backup create my-backup --snapshot-volumes --include-namespaces=velero-ns --volume-snapshot-locations=aws-cloud-default --storage-location=default

5. Create a schedule:

velero create schedule newschedule --schedule="*/1 * * * *" --snapshot-volumes --include-namespaces=velero-ns --volume-snapshot-locations=aws-local-default --storage-location=default

6. Restore from the backup:

velero restore create --from-backup my-backup --restore-volumes=true --namespace-mappings velero-ns:ns1



Signed-off-by: Pawan <pawan@mayadata.io>
2020-09-08 13:44:39 +05:30
Pawan Prakash Sharma
b1b69ebfe7
fix(zfspv): rounding off the volume size to Gi and Mi (#191)
ZFS does not create the zvol if the volume size is not a multiple of the
volblocksize. There are use cases where a customer creates a PVC with size
5G, which is 5 * 1000 * 1000 * 1000 = 5,000,000,000 bytes and not a multiple
of the default 8k volblocksize (5,000,000,000 / 8192 = 610,351.5625 blocks).

In ZFS, volblocksize and recordsize must be a power of 2 from 512B to 1M, so
keeping the size in the form of Gi or Mi is sufficient to make the volsize a
multiple of the volblocksize/recordsize; for example, 5Gi = 5,368,709,120
bytes is exactly 655,360 blocks of 8k.
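A sketch of the use case being fixed (the claim name is a placeholder): a PVC requesting a decimal 5G, which the driver now rounds to a Gi/Mi-aligned size:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: decimal-claim              # hypothetical name
spec:
  storageClassName: openebs-zfspv
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5G   # 5 * 10^9 bytes, not a multiple of the 8k volblocksize
```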


Signed-off-by: Pawan <pawan@mayadata.io>
2020-08-07 20:50:13 +05:30
Pawan
27065bf40a feat(shared): adding shared mount support for ZFSPV volumes
Applications that want to share a volume can use the below storageclass
to make their volumes shareable by multiple pods:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv
parameters:
  shared: "yes"
  fstype: "zfs"
  poolname: "zfspv-pool"
provisioner: zfs.csi.openebs.io
```

Now the volume provisioned using this storageclass can be used by multiple
pods. The pods have to ensure data consistency themselves and use their own
locking mechanism. One thing to note here is that the pods will be scheduled
to the node where the volume is present, so that all of them can access it
locally. This way we avoid the NFS overhead and get optimal performance.
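A hedged sketch of consuming such a shared volume (the deployment name, image and claim name are placeholders): two replicas mount the same PVC, and both land on the node hosting the volume:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: shared-app                 # hypothetical consumer
spec:
  replicas: 2
  selector:
    matchLabels:
      app: shared-app
  template:
    metadata:
      labels:
        app: shared-app
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sleep", "infinity"]
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: shared-claim  # a PVC provisioned with shared: "yes"
```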

Also fixed the log formatting in the GRPC log.

Signed-off-by: Pawan <pawan@mayadata.io>
2020-07-02 00:40:15 +05:30
vaniisgh
d0d1664d43
feat(zfspv): move to klog (#166)
Signed-off-by: vaniisgh <vanisingh@live.co.uk>
2020-06-29 12:18:33 +05:30
vaniisgh
13ec77c75e
feat(zfspv): filter grpc logs to reduce the pollution (#161)
Signed-off-by: vaniisgh <vanisingh@live.co.uk>
2020-06-24 21:41:15 +05:30
wiwen
f5ae3ff476
chore(go-lint): fix golint warning (#133)
Fixes several go lint cases reported by go report. 

Signed-off-by: wiwen <shenggxhz@gmail.com>
2020-06-09 14:47:23 +05:30
Pawan
b08a1e2a1f feat(usage): include pvc name in volume events
Signed-off-by: Pawan <pawan@mayadata.io>
2020-06-08 13:05:23 +05:30
Pawan
45015bf063 fix(pvc): fixing stale ZFSVolume CR issue when deleting pending PVC
A PVC will not be bound if there are wrong parameters/poolname in the
storageclass; the ZFSVolume CR will still be created and remain in Pending
state. Deleting the PVC removes it, but since the PVC was never bound, the
ZFS-LocalPV driver does not get the delete call and leaves the ZFSVolume CR
hanging there.
Reverting the behavior introduced in https://github.com/openebs/zfs-localpv/pull/121:
now the PVC will be bound, but the ZFSVolume will stay in Pending state
until the volume is created.

Signed-off-by: Pawan <pawan@mayadata.io>
2020-06-08 10:31:39 +05:30
Christopher J. Ruwe
377b881653 make character case for keys in parameters map irrelevant, fixing #143
More specifically,
- introduce helper function to get maps with all keys set to lowercase,
- introduce lookup helper based on such maps and
- change lookups for CreateVolumeRequest()s and CreateVolume()s so that
  parameter keys are processed as lowercase irrespective of actual
  spelling.
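A sketch of what the change permits (the mixed-case spellings are illustrative): these keys now resolve exactly like their lowercase forms:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv
parameters:
  PoolName: "zfspv-pool"   # matched case-insensitively, same as poolname
  FsType: "zfs"            # same as fstype
provisioner: zfs.csi.openebs.io
```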

Signed-off-by: Christopher J. Ruwe <cjr@cruwe.de>
2020-06-04 19:25:05 +05:30
Pawan
472fd603ac feat(beta): adding v1 CRD for ZFS-LocalPV
Moving the CRDs to stable v1 version.

Signed-off-by: Pawan <pawan@mayadata.io>
2020-06-04 16:02:32 +05:30
Pawan
42ed7d85ee fix(readonly): honouring readonly flag.
The readonly flag does not come as a mount option; it has a
separate field in the request. The ZFS-LocalPV driver should check that
field and add "ro" as a mount option.
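A hedged sketch of where that field comes from (the pod name and image are placeholders): the read-only intent sits on the claim reference rather than in the mount options, and the driver now maps it to "ro":

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ro-app                     # hypothetical pod
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: csi-zfspv
      readOnly: true               # separate field; driver adds "ro" on mount
```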

Signed-off-by: Pawan <pawan@mayadata.io>
2020-05-27 21:20:53 +05:30
Pawan
25d1f1a413 feat(zfspv): pvc should be bound only if volume has been created.
The controller did not check whether the volume had been created and
returned success anyway, which in turn bound the PVC to the PV.

The PVC should not be bound until the corresponding ZFS volume has been
created. Now the controller checks that the ZFSVolume CR state is "Ready"
before returning success. The CSI provisioner retries the CreateVolume
request when it gets an error reply, and once the ZFS node agent creates the
ZFS volume and sets the ZFSVolume CR state to "Ready", the controller
returns success for the CreateVolume request and the PVC is bound.
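A hedged sketch of the state the controller now waits for; the field names mirror the ZFSSnapshot CR shown elsewhere in this log, and the exact ZFSVolume schema is an assumption:

```yaml
apiVersion: zfs.openebs.io/v1alpha1
kind: ZFSVolume
metadata:
  name: pvc-18cab7c3-ec5e-4264-8507-e6f7df4c789a
  namespace: openebs
spec:
  capacity: "4294967296"
  ownerNodeID: pawan-2
  poolName: zfspv-pool
  status: Ready        # assumed field; CreateVolume succeeds only once Ready
```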

Signed-off-by: Pawan <pawan@mayadata.io>
2020-05-21 08:49:57 +05:30
Pawan Prakash Sharma
dd059a2f43
feat(block): adding block volume support for ZFSPV (#102)
This commit adds support for creating a raw block volume by using volumeMode: Block in the PVC:

```
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: block-claim
spec:
  volumeMode: Block
  storageClassName: zfspv-block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```

The driver will create a zvol for this volume and bind mount the block device at the given path.

Signed-off-by: Pawan <pawan@mayadata.io>
2020-05-05 12:28:46 +05:30
Pawan Prakash Sharma
de9b302083
feat(topology): adding support for custom topology keys (#94)
This commit adds support for users to specify custom labels on the Kubernetes nodes and use them in the allowedTopologies section of the StorageClass.

A few notes:
- This PR depends on the CSI driver's capability to support custom topology keys.
- Labels should be added on the nodes first and the driver deployed afterwards, to make it aware of all the labels a node has. If labels are added after the ZFS-LocalPV driver has been deployed, a restart of all the node CSI driver agents is required so that the driver can pick up the labels and add them as supported topology keys.
- If the storageclass is using Immediate binding mode and no topology key is mentioned, then all the nodes should be labeled using the same key, which means:
  - the same key should be present on all nodes; nodes can have different values for those keys.
  - If nodes are labeled with different keys, i.e. some nodes are missing a key, then ZFSPV's default scheduler cannot effectively do volume-count-based scheduling. In this case the CSI provisioner will pick keys from a random node and prepare the preferred topology list using the nodes which have those keys defined, and the ZFSPV scheduler will schedule the PV among those nodes only. A sketch with a custom key follows this list.
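A sketch with a hypothetical custom key (the `openebs.io/rack` label and its values are placeholders), assuming the nodes were labeled before the driver was deployed:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv
parameters:
  poolname: "zfspv-pool"
provisioner: zfs.csi.openebs.io
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
- matchLabelExpressions:
  - key: openebs.io/rack           # hypothetical custom label on the nodes
    values:
      - rack1
      - rack2
```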

Signed-off-by: Pawan <pawan@mayadata.io>
2020-04-30 14:13:29 +05:30
Pawan Prakash Sharma
fbd4812642
feat(zfspv): adding poolname info to the PV volumeattributes (#80)
Now the PV will have the poolname/parent-dataset info in its volume attributes to help identify the zpool on which the PV has been created:

```
$ kubectl describe pv pvc-22d55c56-0c52-4fd5-894c-1f54c4dac5b7
Name:              pvc-22d55c56-0c52-4fd5-894c-1f54c4dac5b7
Labels:            <none>
Annotations:       pv.kubernetes.io/provisioned-by: zfs.csi.openebs.io
Finalizers:        [kubernetes.io/pv-protection]
StorageClass:      openebs-zfspv
Status:            Bound
Claim:             default/pvcname208
Reclaim Policy:    Delete
Access Modes:      RWO
VolumeMode:        Filesystem
Capacity:          4Gi
Node Affinity:
  Required Terms:
    Term 0:        kubernetes.io/hostname in [pawan-2]
Message:
Source:
    Type:              CSI (a Container Storage Interface (CSI) volume source)
    Driver:            zfs.csi.openebs.io
    VolumeHandle:      pvc-22d55c56-0c52-4fd5-894c-1f54c4dac5b7
    ReadOnly:          false
    VolumeAttributes:      openebs.io/poolname=zfspv-pool
                           storage.kubernetes.io/csiProvisionerIdentity=1586765686638-8081-zfs.csi.openebs.io
Events:                <none>
```
Signed-off-by: Pawan <pawan@mayadata.io>
2020-04-14 08:46:35 +05:30
Pawan
3a1a8e78e6 feat(zfspv): handling unmounted volume
There can be cases where the openebs namespace has been accidentally deleted (Optoro case: https://mdap.zendesk.com/agent/tickets/963). There the driver attempted to destroy the dataset, which first unmounts the dataset and then tries to destroy it; the destroy fails as the volume is busy. Here, as mentioned in the steps to recover, we have to manually mount the dataset:
```
6. The driver might have attempted to destroy the volume before going down, which sets the mount to "no" (this is strange behavior on GKE Ubuntu 18.04). We have to mount the dataset: go to each node and check if there is any unmounted volume:
zfs get mounted
If there is any unmounted dataset with this option as "no", we should do the below:
mountpath=$(zfs get -Hp -o value mountpoint <dataset name>)
zfs set mountpoint=none <dataset name>
zfs set mountpoint=<mountpath> <dataset name>
This will set the dataset to be mounted.
```

So in this case the volume will be unmounted while the mountpoint property is still set to the mountpath. If the application pod is deleted later on, it will try to mount the zfs dataset, and just setting the `mountpoint` property is not sufficient: since the zfs dataset has been unmounted (via zfs destroy in this case), we have to explicitly mount the dataset, **otherwise the application will start running without any persistent storage**. Automating the manual steps performed to resolve the problem: the code now checks whether the zfs dataset is mounted after setting the mountpoint property, and attempts to mount it if it is not.

This is not the case with zvols, as the driver does not attempt to unmount them, so zvols are fine.

Also, the NodeUnpublish operation MUST be idempotent: if this RPC failed, or the CO does not know whether it failed, it can choose to call NodeUnpublishVolume again. This is now handled by returning success if the volume is not mounted; descriptive error messages were also added in a few places.

Signed-off-by: Pawan <pawan@mayadata.io>
2020-04-09 20:53:10 +05:30
Pawan Prakash Sharma
c4c2278d2f
refactor(crd): move CR from openebs.io to zfs.openebs.io (#70)
Changed the group name from openebs.io to zfs.openebs.io.

Now the ZFSVolume CRD will look like this:
```
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: zfsvolumes.zfs.openebs.io
spec:
  group: zfs.openebs.io
  version: v1alpha1
  scope: Namespaced
  names:
    plural: zfsvolumes
    singular: zfsvolume
    kind: ZFSVolume
    shortNames:
    - zfsvol
    - zv
```

The Snapshot CRD will look like this:
```
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: zfssnapshots.zfs.openebs.io
spec:
  group: zfs.openebs.io
  version: v1alpha1
  scope: Namespaced
  names:
    plural: zfssnapshots
    singular: zfssnapshot
    kind: ZFSSnapshot
    shortNames:
    - zfssnapshot
    - zfssnap
```


Signed-off-by: Pawan <pawan@mayadata.io>
2020-03-30 22:12:34 +05:30
Pawan
0e75d89c64 fix(clone): setting properties on the clone volume
Signed-off-by: Pawan <pawan@mayadata.io>
2020-03-12 22:03:58 +05:30
Pawan
86e623a369 feat(resize): adding Online volume expansion support for ZFSPV
We can resize the volume by updating the PVC yaml to the desired size and
applying it. The ZFS driver will take care of updating the quota in case of
a dataset. If we are using a zvol mounted as an ext4 or xfs filesystem, the
driver will take care of expanding the volume via the resize2fs/xfs_growfs
binaries.

For resize, the storageclass that provisions the PVC must support resize;
we should have allowVolumeExpansion set to true in the storageclass:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv
allowVolumeExpansion: true
parameters:
  poolname: "zfspv-pool"
provisioner: zfs.csi.openebs.io

```

Signed-off-by: Pawan <pawan@mayadata.io>
2020-03-04 18:30:28 +05:30
Pawan
d608dbacd8 feat(analytics): adding google analytics for ZFSPV
Whenever a volume is provisioned or de-provisioned, we will send a google
analytics event with mainly the following details:
1. pvName (will be shown as the app title in google analytics)
2. size of the volume
3. event type: volume-provision, volume-deprovision
4. storage type: zfs-localpv
5. replicacount as 1
6. ClientId as the default namespace uuid

Apart from this, we send one event every 24 hours with some info like the
number of nodes, node type, kubernetes version etc.

This metric is controlled by the OPENEBS_IO_ENABLE_ANALYTICS env. We can set
it to false if we don't want to send the metrics.
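A minimal sketch of opting out (only the env fragment is shown; the surrounding driver deployment spec is assumed):

```yaml
env:
- name: OPENEBS_IO_ENABLE_ANALYTICS
  value: "false"   # disables the provision/deprovision and ping events
```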

Signed-off-by: Pawan <pawan@mayadata.io>
2020-03-02 23:00:22 +05:30
Pawan Prakash Sharma
287606b78a
feat(zfspv): adding snapshot and clone support for ZFSPV (#39)
This commit supports snapshot and clone commands via the CSI driver. Users can create snapshots and clones using the following steps.

Note:
- The snapshot is created via reconciliation of the ZFSSnapshot CR
- The cloned volume will be on the same zpool where the snapshot was taken
- The cloned volume will have the same properties as the source volume.

-----------------------------------
Create a Snapshotclass
```
kind: VolumeSnapshotClass
apiVersion: snapshot.storage.k8s.io/v1beta1
metadata:
  name: zfspv-snapclass
  annotations:
    snapshot.storage.kubernetes.io/is-default-class: "true"
driver: zfs.csi.openebs.io
deletionPolicy: Delete
```
Once the snapshotclass is created, we can use it to create a Snapshot:
```
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: zfspv-snap
spec:
  volumeSnapshotClassName: zfspv-snapclass
  source:
    persistentVolumeClaimName: csi-zfspv
```
```
$ kubectl get volumesnapshot
NAME          AGE
zfspv-snap    7m52s
```
```
$ kubectl get volumesnapshot -o yaml
apiVersion: v1
items:
- apiVersion: snapshot.storage.k8s.io/v1beta1
  kind: VolumeSnapshot
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"snapshot.storage.k8s.io/v1beta1","kind":"VolumeSnapshot","metadata":{"annotations":{},"name":"zfspv-snap","namespace":"default"},"spec":{"source":{"persistentVolumeClaimName":"csi-zfspv"},"volumeSnapshotClassName":"zfspv-snapclass"}}
    creationTimestamp: "2020-01-30T10:31:24Z"
    finalizers:
    - snapshot.storage.kubernetes.io/volumesnapshot-as-source-protection
    - snapshot.storage.kubernetes.io/volumesnapshot-bound-protection
    generation: 1
    name: zfspv-snap
    namespace: default
    resourceVersion: "30040"
    selfLink: /apis/snapshot.storage.k8s.io/v1beta1/namespaces/default/volumesnapshots/zfspv-snap
    uid: 1a5cf166-c599-4f58-9f3c-f1148be47fca
  spec:
    source:
      persistentVolumeClaimName: csi-zfspv
    volumeSnapshotClassName: zfspv-snapclass
  status:
    boundVolumeSnapshotContentName: snapcontent-1a5cf166-c599-4f58-9f3c-f1148be47fca
    creationTime: "2020-01-30T10:31:24Z"
    readyToUse: true
    restoreSize: "0"
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
```


The OpenEBS resource for the created snapshot:
```
$ kubectl get snap -n openebs -o yaml
apiVersion: v1
items:
- apiVersion: openebs.io/v1alpha1
  kind: ZFSSnapshot
  metadata:
    creationTimestamp: "2020-01-30T10:31:24Z"
    finalizers:
    - zfs.openebs.io/finalizer
    generation: 2
    labels:
      kubernetes.io/nodename: pawan-2
      openebs.io/persistent-volume: pvc-18cab7c3-ec5e-4264-8507-e6f7df4c789a
    name: snapshot-1a5cf166-c599-4f58-9f3c-f1148be47fca
    namespace: openebs
    resourceVersion: "30035"
    selfLink: /apis/openebs.io/v1alpha1/namespaces/openebs/zfssnapshots/snapshot-1a5cf166-c599-4f58-9f3c-f1148be47fca
    uid: e29d571c-42b5-4fb7-9110-e1cfc9b96641
  spec:
    capacity: "4294967296"
    fsType: zfs
    ownerNodeID: pawan-2
    poolName: zfspv-pool
    status: Ready
    volumeType: DATASET
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
```

Create a clone volume

We can provide the snapshot name as a dataSource to create a clone volume:
    
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: zfspv-clone
spec:
  storageClassName: openebs-zfspv
  dataSource:
    name: zfspv-snap
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
```
It will create a ZFS clone volume from the mentioned snapshot and create the PV on the same node where the original volume is.

Since resize is not supported yet, the clone PVC size should match the size of the snapshot.
Also, the properties from the storageclass will not be considered for the clone; it will take the properties from the snapshot and create the clone volume. One thing to note here is that the storageclass in the clone PVC should have the same poolname as that of the original volume, as clones are not supported across pools.


Signed-off-by: Pawan <pawan@mayadata.io>
2020-02-13 13:31:17 +05:30
Pawan
820d0800cd feat(volstats): return volstats for path if it is a mountpath
Signed-off-by: Pawan <pawan@mayadata.io>
2019-12-30 18:39:35 +05:30
Pawan
1e5c81d2ac feat(volstats): adding client side fs stats
Signed-off-by: Pawan <pawan@mayadata.io>
2019-12-30 18:39:35 +05:30
Pawan
523e862159 refactor(zfspv): renamed watcher to mgmt package
as it also does the management tasks; also corrected a few logs
and renamed zvol to zfs (as we support both zvol and dataset)

Signed-off-by: Pawan <pawan@mayadata.io>
2019-11-26 21:38:32 +05:30
Pawan Prakash Sharma
68db6d2774 feat(ZFSPV): adding support for applications to create "zfs" filesystem (#15)
Applications can now use a storageclass to create a zfs filesystem:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv5
allowVolumeExpansion: true
parameters:
  blocksize: "4k"
  fstype: "zfs"
  poolname: "zfspv-pool"
provisioner: zfs.csi.openebs.io

ZFSPV was supporting only the ext2/3/4 and xfs filesystems, which adds one
extra filesystem layer on top of the ZFS filesystem. Now we can write
directly to the ZFS filesystem and get optimal performance by creating a
ZFS filesystem for storage.

Signed-off-by: Pawan <pawan@mayadata.io>
2019-11-21 19:00:15 +05:30
Pawan Prakash Sharma
a10dedbd5e feat(ZFSPV): volume count based scheduler for ZFSPV (#8)
This is an initial scheduler implementation for ZFS Local PV. 

* adding scheduler as a configurable option
* adding volumeWeightedScheduler as scheduling logic

The volumeWeightedScheduler will go through all the nodes as per the
topology information and pick the node which has the fewest volumes
provisioned in the given pool.

Let's say there are 2 nodes, node1 and node2, with the below pool configuration:
```
node1
|
|-----> pool1
|         |
|         |------> pvc1
|         |------> pvc2
|-----> pool2
          |------> pvc3

node2
|
|-----> pool1
|         |
|         |------> pvc4
|-----> pool2
          |------> pvc5
          |------> pvc6
```
So if the application is using pool1 as shown in the below storage class, the ZFS driver will schedule it on node2, which has one volume in pool1, as compared to node1, which has 2.
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: openebs-zfspv
provisioner: zfs.csi.openebs.io
parameters:
  blocksize: "4k"
  compression: "on"
  dedup: "on"
  thinprovision: "yes"
  poolname: "pool1"
```

Similarly, if the application is using pool2 as shown in the below storage class, the ZFS driver will schedule it on node1, which has only one volume in pool2, as compared to node2, which has 2.
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: openebs-zfspv
provisioner: zfs.csi.openebs.io
parameters:
  blocksize: "4k"
  compression: "on"
  dedup: "on"
  thinprovision: "yes"
  poolname: "pool2"
```
If all the nodes have the same number of volumes for the given pool, the scheduler can pick any node and schedule the PV on it.

Signed-off-by: Pawan <pawan@mayadata.io>
2019-11-06 21:20:49 +05:30
Pawan Prakash Sharma
d0e97cddb2 adding topology support for zfspv (#7)
This PR adds support to allow the CSI driver to pick a node matching the topology specified in the storage class. Admins can specify allowedTopologies in the StorageClass to list the nodes where the zfs pools are set up:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv
allowVolumeExpansion: true
parameters:
  blocksize: "4k"
  compression: "on"
  dedup: "on"
  thinprovision: "yes"
  poolname: "zfspv-pool"
provisioner: zfs-localpv
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
- matchLabelExpressions:
  - key: kubernetes.io/hostname
    values:
      - gke-zfspv-pawan-default-pool-c8929518-cgd4
      - gke-zfspv-pawan-default-pool-c8929518-dxzc
```

Note: This PR picks the first node from the list of available nodes.

Signed-off-by: Pawan <pawan@mayadata.io>
2019-11-01 06:46:04 +05:30
Pawan Prakash Sharma
0218dacea0 feat(ZFSPV): adding encryption in ZFSVolume CR (#6)
Adding support for enabling encryption using a custom key.

Also, adding support to inherit properties from the ZPOOL: for properties
not listed in the storage class, the ZFS driver will not pass default
values while creating the volume; those properties will be inherited
from the ZPOOL.

We can use the encryption option in the storage class:
```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv
allowVolumeExpansion: true
parameters:
  blocksize: "4k"
  compression: "on"
  dedup: "on"
  thinprovision: "yes"
  encryption: "on"
  keyformat: "raw"
  keylocation: "file:///home/keys/key"
  poolname: "zfspv-pool"
provisioner: openebs.io/zfs
```

Just a note: the key file should be mounted inside the node-agent container so that it can be used while provisioning the volume. keyformat can be raw, hex or passphrase.

Signed-off-by: Pawan <pawan@mayadata.io>
2019-10-15 22:51:48 +05:30