Commit graph

304 commits

Author SHA1 Message Date
Pawan
fd2ec40fb5 chore(doc): adding resize details in README
Signed-off-by: Pawan <pawan@mayadata.io>
2020-03-06 14:02:23 +05:30
Pawan
7178387c1e feat(resize): adding BDD test for Online volume expansion
Signed-off-by: Pawan <pawan@mayadata.io>
2020-03-06 10:25:18 +05:30
Pawan
86e623a369 feat(resize): adding Online volume expansion support for ZFSPV
We can resize the volume by updating the PVC yaml to
the desired size and applying it. The ZFS driver will take care
of updating the quota in the case of a dataset. If we are using a
zvol and have mounted it as an ext4 or xfs filesystem, the driver will take
care of expanding the volume via the resize2fs/xfs_growfs binaries.

For resize, the storageclass that provisions the PVC must support
resize: we should have allowVolumeExpansion set to true in the storageclass:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv
allowVolumeExpansion: true
parameters:
  poolname: "zfspv-pool"
provisioner: zfs.csi.openebs.io

```
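
For illustration, expanding a volume then only requires raising the requested size on the PVC. A minimal sketch, reusing the csi-zfspv PVC name from the snapshot examples further down (the old and new sizes here are assumptions):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-zfspv
spec:
  storageClassName: openebs-zfspv
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi   # assumed: raised from 4Gi; the driver expands the dataset
                     # quota (or zvol + resize2fs/xfs_growfs) online
```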

Signed-off-by: Pawan <pawan@mayadata.io>
2020-03-04 18:30:28 +05:30
Pawan
dc5edb901c feat(analytics): vendor code for google analytics
Signed-off-by: Pawan <pawan@mayadata.io>
2020-03-02 23:00:22 +05:30
Pawan
d608dbacd8 feat(analytics): adding google analytics for ZFSPV
Whenever a volume is provisioned or de-provisioned, we will send a Google Analytics event with mainly the following details:
1. pvName (will be shown as the app title in Google Analytics)
2. size of the volume
3. event type: volume-provision, volume-deprovision
4. storage type: zfs-localpv
5. replicacount as 1
6. ClientId as the default namespace uuid

Apart from this, we send an event once every 24 hours, which carries some info like the number of nodes, node type, kubernetes version, etc.

This metric is controlled by the OPENEBS_IO_ENABLE_ANALYTICS environment variable. We can set it to false if we don't want to send the metrics.
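
A sketch of opting out by setting the variable on the driver's container (the surrounding deployment layout is an assumption, not taken from the actual operator yaml):

```yaml
env:
  - name: OPENEBS_IO_ENABLE_ANALYTICS
    value: "false"   # disable provision/deprovision events and the 24h ping
```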

Signed-off-by: Pawan <pawan@mayadata.io>
2020-03-02 23:00:22 +05:30
Pawan
0fc86d843b chore(doc): updating readme with snapshot and clone details
Signed-off-by: Pawan <pawan@mayadata.io>
2020-03-01 00:30:22 +05:30
Aman Gupta
5922ebe038
add(doc): Adding the list of e2e test cases (#50)
Added the list of automated and manual e2e test cases specific to zfs-localpv 

Signed-off-by: Aman Gupta <aman.gupta@mayadata.io>
2020-02-28 22:33:22 +05:30
prateekpandey14
c2f22025b0 fix(operator): update provisioner image to support snapshot datasource
These changes fix the zfs operator yaml to use the 1.5.0 csi-provisioner
image, which supports volumesnapshot as a datasource type for
creating clone volumes.

Signed-off-by: prateekpandey14 <prateekpandey14@gmail.com>
2020-02-14 00:10:07 +05:30
Pawan
638d8ae4e4 chore(doc): adding v0.4 changelog in the repo
Signed-off-by: Pawan <pawan@mayadata.io>
2020-02-13 14:51:44 +05:30
Pawan
48b7a02ccd refactor(version): bumping the version to 0.5
Signed-off-by: Pawan <pawan@mayadata.io>
2020-02-13 14:51:24 +05:30
Pawan Prakash Sharma
287606b78a
feat(zfspv): adding snapshot and clone support for ZFSPV (#39)
This commit adds support for snapshot and clone commands via the CSI driver. Users can create snapshots and clones using the following steps.

Note:
- Snapshot is created via reconciliation of the ZFSSnapshot CR
- The cloned volume will be on the same zpool where the snapshot was taken
- The cloned volume will have the same properties as the source volume

-----------------------------------
Create a Snapshotclass
```
kind: VolumeSnapshotClass
apiVersion: snapshot.storage.k8s.io/v1beta1
metadata:
  name: zfspv-snapclass
  annotations:
    snapshot.storage.kubernetes.io/is-default-class: "true"
driver: zfs.csi.openebs.io
deletionPolicy: Delete
```
Once snapshotclass is created, we can use this class to create a Snapshot 
```
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: zfspv-snap
spec:
  volumeSnapshotClassName: zfspv-snapclass
  source:
    persistentVolumeClaimName: csi-zfspv
```
```
$ kubectl get volumesnapshot
NAME          AGE
zfspv-snap    7m52s
```
```
$ kubectl get volumesnapshot -o yaml
apiVersion: v1
items:
- apiVersion: snapshot.storage.k8s.io/v1beta1
  kind: VolumeSnapshot
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"snapshot.storage.k8s.io/v1beta1","kind":"VolumeSnapshot","metadata":{"annotations":{},"name":"zfspv-snap","namespace":"default"},"spec":{"source":{"persistentVolumeClaimName":"csi-zfspv"},"volumeSnapshotClassName":"zfspv-snapclass"}}
    creationTimestamp: "2020-01-30T10:31:24Z"
    finalizers:
    - snapshot.storage.kubernetes.io/volumesnapshot-as-source-protection
    - snapshot.storage.kubernetes.io/volumesnapshot-bound-protection
    generation: 1
    name: zfspv-snap
    namespace: default
    resourceVersion: "30040"
    selfLink: /apis/snapshot.storage.k8s.io/v1beta1/namespaces/default/volumesnapshots/zfspv-snap
    uid: 1a5cf166-c599-4f58-9f3c-f1148be47fca
  spec:
    source:
      persistentVolumeClaimName: csi-zfspv
    volumeSnapshotClassName: zfspv-snapclass
  status:
    boundVolumeSnapshotContentName: snapcontent-1a5cf166-c599-4f58-9f3c-f1148be47fca
    creationTime: "2020-01-30T10:31:24Z"
    readyToUse: true
    restoreSize: "0"
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
```


OpenEBS resource for the created snapshot:
```
$ kubectl get snap -n openebs -o yaml
apiVersion: v1
items:
- apiVersion: openebs.io/v1alpha1
  kind: ZFSSnapshot
  metadata:
    creationTimestamp: "2020-01-30T10:31:24Z"
    finalizers:
    - zfs.openebs.io/finalizer
    generation: 2
    labels:
      kubernetes.io/nodename: pawan-2
      openebs.io/persistent-volume: pvc-18cab7c3-ec5e-4264-8507-e6f7df4c789a
    name: snapshot-1a5cf166-c599-4f58-9f3c-f1148be47fca
    namespace: openebs
    resourceVersion: "30035"
    selfLink: /apis/openebs.io/v1alpha1/namespaces/openebs/zfssnapshots/snapshot-1a5cf166-c599-4f58-9f3c-f1148be47fca
    uid: e29d571c-42b5-4fb7-9110-e1cfc9b96641
  spec:
    capacity: "4294967296"
    fsType: zfs
    ownerNodeID: pawan-2
    poolName: zfspv-pool
    status: Ready
    volumeType: DATASET
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
```

Create a clone volume

We can provide the snapshot name as a datasource to create a clone volume:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: zfspv-clone
spec:
  storageClassName: openebs-zfspv
  dataSource:
    name: zfspv-snap
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
```
It will create a ZFS clone volume from the mentioned snapshot and create the PV on the same node where the original volume resides.

Here, as resize is not supported yet, the clone PVC size should match the size of the snapshot.
Also, not all the properties from the storageclass will be considered for the clone case; it will take the properties from the snapshot and create the clone volume. One thing to note here is that the storageclass in the clone PVC should have the same poolname as that of the original volume, as cloning across pools is not supported.


Signed-off-by: Pawan <pawan@mayadata.io>
2020-02-13 13:31:17 +05:30
Pawan
b0434bb537 fix(zfspv): do not destroy the dataset with -R option
With "zfs destroy -R" we will delete snapshot and clones also. We should
not use that for deleting the volumes.
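
For illustration, on a hypothetical dataset (the pool name matches the examples in this repo; the volume name is made up):

```
# -R recursively destroys dependents (snapshots and clones) -- too destructive:
$ zfs destroy -R zfspv-pool/pvc-xxxx

# destroying only the dataset leaves snapshots/clones alone
# (and fails if any still exist, which is the safe behavior we want):
$ zfs destroy zfspv-pool/pvc-xxxx
```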

Signed-off-by: Pawan <pawan@mayadata.io>
2020-01-31 13:12:57 +05:30
Aman Gupta
d826d1dcb8 fix(doc): Resolving the typo error in README doc
Signed-off-by: Aman Gupta <aman.gupta@mayadata.io>
2020-01-28 12:32:40 +05:30
Pawan
896a6032ca chore(metrics): adding list of zfs metrics exposed by prometheus
Signed-off-by: Pawan <pawan@mayadata.io>
2020-01-17 14:40:42 +05:30
Pawan
e3467120fb refactor(version): bumping the version to 0.4
also adding more descriptive version log.

Signed-off-by: Pawan <pawan@mayadata.io>
2020-01-17 14:39:46 +05:30
Pawan
784349cca1 chore(doc): adding v0.3 changelog in the repo
Signed-off-by: Pawan <pawan@mayadata.io>
2020-01-16 23:44:02 +05:30
Pawan Prakash Sharma
0b56f0ae53 feat(alert): adding sample prometheus rules for ZFSPV (#32)
Provide sample instructions on setting up prometheus via prometheus-operator, and then on configuring a sample rule to monitor volume space utilization so that, once available space is less than 10%, it starts firing an alert.

```
 100 * kubelet_volume_stats_available_bytes{job="kubelet"}
          /
        kubelet_volume_stats_capacity_bytes{job="kubelet"}
          < 10
```
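
A sketch of wiring this expression into a PrometheusRule object (the object name, label, and alert name below are assumptions, not taken from the sample in the repo):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: zfspv-volume-alerts     # hypothetical name
  labels:
    role: alert-rules           # must match the Prometheus CR's ruleSelector
spec:
  groups:
    - name: zfspv.rules
      rules:
        - alert: VolumeSpaceLow
          expr: |
            100 * kubelet_volume_stats_available_bytes{job="kubelet"}
              / kubelet_volume_stats_capacity_bytes{job="kubelet"} < 10
          for: 5m
          annotations:
            summary: PV has less than 10% space available
```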

Signed-off-by: Pawan <pawan@mayadata.io>
2020-01-09 23:10:13 +05:30
Pawan
7094c48a8f feat(HA): adding antiaffinity in the controller deployment
so that no two pods get scheduled on the same node. Also keeping
the default replica count at 1; if the HA feature is required, we can change
the replica count to 2 (or more).
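
A sketch of such a podAntiAffinity stanza in the controller's pod template (the pod label is an assumption):

```yaml
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: openebs-zfs-controller   # assumed controller pod label
        topologyKey: kubernetes.io/hostname
```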

Signed-off-by: Pawan <pawan@mayadata.io>
2020-01-06 19:13:53 +05:30
Pawan
4689c21cb4 feat(HA): adding support to have controller in HA
We can have more than one controller in the system, but only one will
be the master and the others will be slaves. Once the master is down, one of the slaves will
take over via the lease mechanism and start provisioning/deprovisioning the volumes.
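
For example, enabling HA could then be as simple as scaling the controller up (the workload kind, name, and namespace here are assumptions about the install):

```
$ kubectl scale statefulset openebs-zfs-controller -n kube-system --replicas=2
```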

Signed-off-by: Pawan <pawan@mayadata.io>
2020-01-06 19:13:53 +05:30
Pawan
dfe4631835 chore(doc): adding faq.md for ZFSPV
Signed-off-by: Pawan <pawan@mayadata.io>
2019-12-30 18:43:04 +05:30
Pawan
5c992a5ba4 chore(doc): adding contributing doc
Signed-off-by: Pawan <pawan@mayadata.io>
2019-12-30 18:43:04 +05:30
Pawan
820d0800cd feat(volstats): return volstats for path if it is a mountpath
Signed-off-by: Pawan <pawan@mayadata.io>
2019-12-30 18:39:35 +05:30
Pawan
1e5c81d2ac feat(volstats): adding client side fs stats
Signed-off-by: Pawan <pawan@mayadata.io>
2019-12-30 18:39:35 +05:30
Pawan
754755439b test(zfspv): adding zfs property update test cases
Signed-off-by: Pawan <pawan@mayadata.io>
2019-12-30 18:37:28 +05:30
Pawan
620be59016 chore(doc): adding badges for the ZFSPV repository
Signed-off-by: Pawan <pawan@mayadata.io>
2019-12-24 09:15:08 +05:30
Pawan
72bc0b0057 chore(doc): making zfs-localpv repository CNCF compatible
adding MAINTAINER, CODE_OF_CONDUCT.md and GOVERNANCE.md files.

Signed-off-by: Pawan <pawan@mayadata.io>
2019-12-11 15:44:26 +05:30
Pawan Prakash Sharma
8078791cc5 chore(doc): adding roadmap in the README (#25)
* chore(doc): adding roadmap in the README

Signed-off-by: Pawan <pawan@mayadata.io>
2019-12-11 15:43:28 +05:30
Pawan Prakash Sharma
1c5d656635 chore(doc): adding changelog in the repo (#23)
* chore(doc): adding changelog in the repo
* chore(doc): adding v0.2 change log

Signed-off-by: Pawan <pawan@mayadata.io>
2019-12-10 11:49:40 +05:30
Pawan
1ce53690f8 test(zfspv): making test cases to run on forked repo
Signed-off-by: Pawan <pawan@mayadata.io>
2019-12-06 09:47:11 +05:30
Pawan
c3c5eb1794 test(zfspv): vendor for ginkgo test code
Signed-off-by: Pawan <pawan@mayadata.io>
2019-12-04 13:17:04 +05:30
Pawan
d933b47c75 test(zfspv): minikube setup for travis
to run integration test cases

Signed-off-by: Pawan <pawan@mayadata.io>
2019-12-04 13:17:04 +05:30
Pawan
7ab6156b98 fix(zfspv): changing image pull policy to IfNotPresent
so that the image is not pulled every time. It is also needed
so that, while running integration tests, the locally built image
is used instead of being fetched from DockerHub or Quay, letting
us run CI on the locally built image.
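
A sketch of the relevant container spec (the container name and image tag are assumptions):

```yaml
containers:
  - name: openebs-zfs-plugin        # assumed container name
    image: openebs/zfs-driver:ci    # assumed locally built tag
    imagePullPolicy: IfNotPresent   # use the node-local image if present
```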

Signed-off-by: Pawan <pawan@mayadata.io>
2019-12-04 13:14:32 +05:30
Pawan
523e862159 refactor(zfspv): renamed watcher to mgmt package
as it does the management task. Also corrected a few logs
and renamed zvol to zfs (as we support both zvol and dataset).

Signed-off-by: Pawan <pawan@mayadata.io>
2019-11-26 21:38:32 +05:30
Pawan
e953af99cf fix(yaml): fixing mongo yaml
As the selector is needed in the latest kubernetes clusters.
Also updated the zfs volume custom resource and renamed a few
fields of the percona application.

Signed-off-by: Pawan <pawan@mayadata.io>
2019-11-25 18:46:02 +05:30
Pawan
0b7229a573 chore(doc): updating readme with latest details
Signed-off-by: Pawan <pawan@mayadata.io>
2019-11-22 17:17:36 +05:30
Pawan Prakash Sharma
68db6d2774 feat(ZFSPV): adding support for applications to create "zfs" filesystem (#15)
Applications can now create a storageclass to create a zfs filesystem:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv5
allowVolumeExpansion: true
parameters:
  blocksize: "4k"
  fstype: "zfs"
  poolname: "zfspv-pool"
provisioner: zfs.csi.openebs.io
```

ZFSPV was supporting ext2/3/4 and xfs filesystems only, which
adds one extra filesystem layer on top of the ZFS filesystem. Now
we can write directly to the ZFS filesystem and get optimal performance
by directly creating a ZFS filesystem for storage.

Signed-off-by: Pawan <pawan@mayadata.io>
2019-11-21 19:00:15 +05:30
Akhil Mohan
4ffd857191 chore(README): fix scheduling algorithm doc
fix scheduling algorithm doc and explain how the scheduling is
done currently. Also included the steps to make use of
kubernetes scheduler instead of the scheduler in zfs-localpv

Signed-off-by: Akhil Mohan <akhil.mohan@mayadata.io>
2019-11-14 18:56:09 +05:30
Pawan
69d72bd64e fix(bug): fixed a typo for thinprovision json name.
Signed-off-by: Pawan <pawan@mayadata.io>
2019-11-08 18:26:36 +05:30
Pawan
66cd525bab fix(doc): updating sample ZFSVolume CR
Signed-off-by: Pawan <pawan@mayadata.io>
2019-11-08 15:51:06 +05:30
Pawan
e174830eef remove unnecessary deploy from travis
Signed-off-by: Pawan <pawan@mayadata.io>
2019-11-07 20:13:11 +05:30
Pawan
a863a518a3 chore(doc): updating readme with latest details
Signed-off-by: Pawan <pawan@mayadata.io>
2019-11-07 16:08:58 +05:30
Pawan
57b3acf079 feat(ZFSPV): adding xfs filesystem support for zfs-localpv
Signed-off-by: Pawan <pawan@mayadata.io>
2019-11-06 22:02:51 +05:30
Pawan Prakash Sharma
a10dedbd5e feat(ZFSPV): volume count based scheduler for ZFSPV (#8)
This is an initial scheduler implementation for ZFS Local PV. 

* adding scheduler as a configurable option
* adding volumeWeightedScheduler as scheduling logic

The volumeWeightedScheduler will go through all the nodes as per
the topology information and pick the node which has fewer
volumes provisioned in the given pool.

Let's say there are 2 nodes, node1 and node2, with the below pool configuration:
```
node1
|
|-----> pool1
|         |
|         |------> pvc1
|         |------> pvc2
|-----> pool2
          |------> pvc3

node2
|
|-----> pool1
|         |
|         |------> pvc4
|-----> pool2
          |------> pvc5
          |------> pvc6
```
So if an application is using pool1 as shown in the below storage class, then the ZFS driver will schedule it on node2, as node2 has one volume in pool1 compared to node1, which has 2.
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: openebs-zfspv
provisioner: zfs.csi.openebs.io
parameters:
  blocksize: "4k"
  compression: "on"
  dedup: "on"
  thinprovision: "yes"
  poolname: "pool1"
```

So if an application is using pool2 as shown in the below storage class, then the ZFS driver will schedule it on node1, as node1 has only one volume in pool2 compared to node2, which has 2.
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: openebs-zfspv
provisioner: zfs.csi.openebs.io
parameters:
  blocksize: "4k"
  compression: "on"
  dedup: "on"
  thinprovision: "yes"
  poolname: "pool2"
```
In case of the same number of volumes on all the nodes for the given pool, it can pick any node and schedule the PV on it.

Signed-off-by: Pawan <pawan@mayadata.io>
2019-11-06 21:20:49 +05:30
Pawan Prakash Sharma
d0e97cddb2 adding topology support for zfspv (#7)
This PR adds support to allow the CSI driver to pick a node matching the topology specified in the storage class. The admin can specify allowedTopologies in the StorageClass to list the nodes where the zfs pools are set up.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv
allowVolumeExpansion: true
parameters:
  blocksize: "4k"
  compression: "on"
  dedup: "on"
  thinprovision: "yes"
  poolname: "zfspv-pool"
provisioner: zfs-localpv
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
- matchLabelExpressions:
  - key: kubernetes.io/hostname
    values:
      - gke-zfspv-pawan-default-pool-c8929518-cgd4
      - gke-zfspv-pawan-default-pool-c8929518-dxzc
```

Note: This PR picks up the first node from the list of nodes available.

Signed-off-by: Pawan <pawan@mayadata.io>
2019-11-01 06:46:04 +05:30
Pawan Prakash Sharma
0218dacea0 feat(ZFSPV): adding encryption in ZFSVolume CR (#6)
Adding support for enabling encryption using a custom key.

Also, adding support to inherit from the ZPOOL those properties
which are not listed in the storage class: the ZFS driver will
not pass default values while creating the volume, so those
properties will be inherited from the ZPOOL.

We can use the encryption options in the storage class:
```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv
allowVolumeExpansion: true
parameters:
  blocksize: "4k"
  compression: "on"
  dedup: "on"
  thinprovision: "yes"
  encryption: "on"
  keyformat: "raw"
  keylocation: "file:///home/keys/key"
  poolname: "zfspv-pool"
provisioner: openebs.io/zfs
```

Just a note: the key file should be mounted inside the node-agent container so that we can use that file while provisioning the volume. keyformat can be raw, hex, or passphrase.
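
A sketch of mounting the key into the node agent via a hostPath volume (the container and volume names are assumptions; the path matches the keylocation above):

```yaml
# in the node-agent daemonset pod spec:
volumes:
  - name: encr-keys
    hostPath:
      path: /home/keys
      type: Directory
containers:
  - name: openebs-zfs-plugin      # assumed container name
    volumeMounts:
      - name: encr-keys
        mountPath: /home/keys     # so file:///home/keys/key resolves
```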

Signed-off-by: Pawan <pawan@mayadata.io>
2019-10-15 22:51:48 +05:30
fossabot
cc6ff6c520 Add license scan report and status
Signed-off-by: fossabot <badges@fossa.io>
2019-09-26 07:53:46 +05:30
Pawan
37888725d9 moving supported system near pre-req
Signed-off-by: Pawan <pawan@mayadata.io>
2019-09-20 19:59:06 +05:30
Pawan
0ca8141f0f chore(doc): updating README with volume property usage
Signed-off-by: Pawan <pawan@mayadata.io>
2019-09-20 19:59:06 +05:30
Pawan
b33542eee2 bug(zfspv): not able to deploy on rancher with ZFS 0.8.
ZFS 0.8 has a dependency on libcrypto.so.1.1, which in turn
requires GLIBC_2.25 support on the system. Changed the docker
image to 18.04, as 16.04 has glibc version 2.23.

Also updated the README with the supported system details.
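
A sketch of the corresponding base-image change (the Dockerfile layout is assumed):

```
# FROM ubuntu:16.04   -- glibc 2.23, too old for libcrypto.so.1.1
FROM ubuntu:18.04     # ships glibc >= 2.25
```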

Signed-off-by: Pawan <pawan@mayadata.io>
2019-09-19 21:26:26 +05:30
Pawan
eee591257c chore(doc): adding README for ZFSPV
Signed-off-by: Pawan <pawan@mayadata.io>
2019-09-18 22:42:03 +05:30