Commit graph

17 commits

Author SHA1 Message Date
Pawan Prakash Sharma
1b30116e5f
feat(migration): adding support to migrate the PV to a new node (#304)
Use case: A node in the Kubernetes cluster is replaced with a new node. The
new node gets a different `kubernetes.io/hostname`. The storage devices
that were attached to the old node are re-attached to the new node.

Fix: Instead of using the default `kubernetes.io/hostname` as the node affinity
label, this commit changes the driver to use `openebs.io/nodeid`. The ZFS LocalPV driver
will pick the value from the node and set the affinity.

Once the old node is removed from the cluster, the K8s scheduler will still try to
schedule the applications only on the old node, since the PV node affinity still points to it.

The user can now set the value of `openebs.io/nodeid` on the new node to the same
value that was present on the old node. This makes sure the pods/volumes are
scheduled to the new node.


Note: To migrate the PV to another node, we have to move the disks to the other node,
remove the old node from the cluster, and set the same label (same key and value) on the
new node, which will let the k8s scheduler schedule the pods to that node.
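
For illustration, a sketch of that relabeling step; the node name and id value here are placeholders, not taken from this commit:

```
# assumption: the old node carried openebs.io/nodeid=node-1; setting the same
# value on the replacement node lets the existing PV affinity match it
kubectl label node new-node-1 openebs.io/nodeid=node-1 --overwrite
```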

Other updates: 
* adding an FAQ doc
* renaming the config variable to nodename

Signed-off-by: Pawan <pawan@mayadata.io>
Co-authored-by: Akhil Mohan <akhilerm@gmail.com>

* Update docs/faq.md

Co-authored-by: Akhil Mohan <akhilerm@gmail.com>
2021-05-01 19:05:01 +05:30
Prateek Pandey
b1aa6ab51a
refact(deps): bump k8s and client-go deps to version v0.20.2 (#294)
Signed-off-by: prateekpandey14 <prateek.pandey@mayadata.io>
2021-03-31 16:43:42 +05:30
Pawan Prakash Sharma
6ec49df225
fix(restore): adding support to restore in an encrypted pool (#292)
An encrypted pool does not allow the volume to be pre-created for the
restore. This changes the design to do the restore first
and then create the ZFSVolume object, which will bind to the volume
already created during the restore.


Signed-off-by: Pawan <pawan@mayadata.io>
2021-03-01 23:56:42 +05:30
Shubham Bajpai
2906d39d94
refact(csi): use common lib-csi imports (#263)
Signed-off-by: shubham <shubham.bajpai@mayadata.io>
2020-12-18 21:12:52 +05:30
Pawan Prakash Sharma
e40026c98a
feat(zfspv): adding backup and restore support (#162)
This commit adds support for Backup and Restore controllers, which will be watching for
the corresponding events. The velero plugin will create a Backup CR with the remote
location information, and the backup controller will send the data
to that remote location.

In the same way, the velero plugin will create a Restore CR to restore the
volume from the remote location, and the restore controller will restore
the data.

Steps to use the velero plugin for ZFS-LocalPV:

1. Install velero

2. Add the openebs plugin:

```
velero plugin add openebs/velero-plugin:latest
```

3. Create the volumesnapshotlocation:

For a full backup:

```yaml
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: openebs.io/zfspv-blockstore
  config:
    bucket: velero
    prefix: zfs
    namespace: openebs
    provider: aws
    region: minio
    s3ForcePathStyle: "true"
    s3Url: http://minio.velero.svc:9000
```

For an incremental backup:

```yaml
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: openebs.io/zfspv-blockstore
  config:
    bucket: velero
    prefix: zfs
    backup: incremental
    namespace: openebs
    provider: aws
    region: minio
    s3ForcePathStyle: "true"
    s3Url: http://minio.velero.svc:9000
```

4. Create a backup:

```
velero backup create my-backup --snapshot-volumes --include-namespaces=velero-ns --volume-snapshot-locations=aws-cloud-default --storage-location=default
```

5. Create a schedule:

```
velero create schedule newschedule  --schedule="*/1 * * * *" --snapshot-volumes --include-namespaces=velero-ns --volume-snapshot-locations=aws-local-default --storage-location=default
```

6. Restore from the backup:

```
velero restore create --from-backup my-backup --restore-volumes=true --namespace-mappings velero-ns:ns1
```
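
As a hedged aside (standard velero commands, not part of this commit), the status of the backup and the restore can be checked with:

```
# list backups and restores along with their completion status
velero backup get
velero restore get
```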



Signed-off-by: Pawan <pawan@mayadata.io>
2020-09-08 13:44:39 +05:30
Pawan
27065bf40a feat(shared): adding shared mount support for ZFSPV volumes
Applications that want to share a volume can use the below storageclass
to make their volumes shareable by multiple pods:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv
parameters:
  shared: "yes"
  fstype: "zfs"
  poolname: "zfspv-pool"
provisioner: zfs.csi.openebs.io
```

Now a volume provisioned using this storageclass can be used by multiple pods.
Here the pods have to take care of data consistency and use their own locking mechanism.
One thing to note here is that the pods will be scheduled to the node where the volume is present,
so that all the pods can use the same volume, as they can only access it locally.
This way we can avoid the NFS overhead and also get optimal performance.
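
For illustration, a minimal sketch of a PVC against the shared storageclass above; the claim name is a placeholder, not part of this commit. Every pod that mounts this claim will be scheduled to the node where the volume resides:

```yaml
# hypothetical PVC using the shared storageclass defined above
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: csi-zfspv-shared
spec:
  storageClassName: openebs-zfspv
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
```

Multiple pods can then reference `csi-zfspv-shared` in their volumes section and will land on the same node.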

Also fixed the log formatting in the GRPC log.

Signed-off-by: Pawan <pawan@mayadata.io>
2020-07-02 00:40:15 +05:30
wiwen
f5ae3ff476
chore(go-lint): fix golint warning (#133)
Fixes several golint cases reported by go report.

Signed-off-by: wiwen <shenggxhz@gmail.com>
2020-06-09 14:47:23 +05:30
Pawan
472fd603ac feat(beta): adding v1 CRD for ZFS-LocalPV
Moving the CRDs to the stable v1 version.

Signed-off-by: Pawan <pawan@mayadata.io>
2020-06-04 16:02:32 +05:30
Pawan
25d1f1a413 feat(zfspv): pvc should be bound only if volume has been created.
The controller does not check whether the volume has been created and
returns success, which in turn binds the PVC to the PV.

The PVC should not be bound until the corresponding ZFS volume has been created.
Now the controller will check that the ZFSVolume CR state is "Ready" before returning
success. The CSI driver will retry the CreateVolume request when it gets
an error reply, and once the ZFS node agent creates the ZFS volume and sets the
ZFSVolume CR state to "Ready", the controller will return success for the
CreateVolume request and the PVC will be bound.
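
As a hedged illustration (not part of this commit), the intermediate state can be observed on the ZFSVolume CR while the PVC stays Pending:

```
# zv is the shortname defined for the ZFSVolume CRD; the openebs namespace
# is where the driver keeps its CRs in the other examples in this log
kubectl get zv -n openebs
```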

Signed-off-by: Pawan <pawan@mayadata.io>
2020-05-21 08:49:57 +05:30
Prateek Pandey
6033789c17
feat(crd-gen): automate the CRDs generation with validations for APIs (#75)
- To generate the CRD spec, run `make manifest`, which generates them under the
  deploy/yamls directory
- Added an update-crd script to automate the steps to generate the
  CRDs and the validation for each type

Signed-off-by: prateekpandey14 <prateek.pandey@mayadata.io>
2020-04-01 17:54:20 +05:30
Pawan Prakash Sharma
c4c2278d2f
refactor(crd): move CR from openebs.io to zfs.openebs.io (#70)
Changed the group name from openebs.io to zfs.openebs.io.

Now the ZFSVolume CRD will look like this:
```
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: zfsvolumes.zfs.openebs.io
spec:
  group: zfs.openebs.io
  version: v1alpha1
  scope: Namespaced
  names:
    plural: zfsvolumes
    singular: zfsvolume
    kind: ZFSVolume
    shortNames:
    - zfsvol
    - zv
```

The ZFSSnapshot CRD will look like this:
```
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: zfssnapshots.zfs.openebs.io
spec:
  group: zfs.openebs.io
  version: v1alpha1
  scope: Namespaced
  names:
    plural: zfssnapshots
    singular: zfssnapshot
    kind: ZFSSnapshot
    shortNames:
    - zfssnapshot
    - zfssnap

```


Signed-off-by: Pawan <pawan@mayadata.io>
2020-03-30 22:12:34 +05:30
Pawan Prakash Sharma
287606b78a
feat(zfspv): adding snapshot and clone support for ZFSPV (#39)
This commit supports snapshot and clone operations via the CSI driver. Users can create snapshots and clones using the following steps.

Note:
- Snapshots are created via reconciliation of the snapshot CR
- The cloned volume will be on the same zpool where the snapshot was taken
- The cloned volume will have the same properties as the source volume.

-----------------------------------
Create a SnapshotClass:
```
kind: VolumeSnapshotClass
apiVersion: snapshot.storage.k8s.io/v1beta1
metadata:
  name: zfspv-snapclass
  annotations:
    snapshot.storage.kubernetes.io/is-default-class: "true"
driver: zfs.csi.openebs.io
deletionPolicy: Delete
```
Once the snapshotclass is created, we can use this class to create a snapshot:
```
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: zfspv-snap
spec:
  volumeSnapshotClassName: zfspv-snapclass
  source:
    persistentVolumeClaimName: csi-zfspv
```
```
$ kubectl get volumesnapshot
NAME          AGE
zfspv-snap    7m52s
```
```
$ kubectl get volumesnapshot -o yaml
apiVersion: v1
items:
- apiVersion: snapshot.storage.k8s.io/v1beta1
  kind: VolumeSnapshot
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"snapshot.storage.k8s.io/v1beta1","kind":"VolumeSnapshot","metadata":{"annotations":{},"name":"zfspv-snap","namespace":"default"},"spec":{"source":{"persistentVolumeClaimName":"csi-zfspv"},"volumeSnapshotClassName":"zfspv-snapclass"}}
    creationTimestamp: "2020-01-30T10:31:24Z"
    finalizers:
    - snapshot.storage.kubernetes.io/volumesnapshot-as-source-protection
    - snapshot.storage.kubernetes.io/volumesnapshot-bound-protection
    generation: 1
    name: zfspv-snap
    namespace: default
    resourceVersion: "30040"
    selfLink: /apis/snapshot.storage.k8s.io/v1beta1/namespaces/default/volumesnapshots/zfspv-snap
    uid: 1a5cf166-c599-4f58-9f3c-f1148be47fca
  spec:
    source:
      persistentVolumeClaimName: csi-zfspv
    volumeSnapshotClassName: zfspv-snapclass
  status:
    boundVolumeSnapshotContentName: snapcontent-1a5cf166-c599-4f58-9f3c-f1148be47fca
    creationTime: "2020-01-30T10:31:24Z"
    readyToUse: true
    restoreSize: "0"
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
```


The OpenEBS resource for the created snapshot:
```
$ kubectl get snap -n openebs -o yaml
apiVersion: v1
items:
- apiVersion: openebs.io/v1alpha1
  kind: ZFSSnapshot
  metadata:
    creationTimestamp: "2020-01-30T10:31:24Z"
    finalizers:
    - zfs.openebs.io/finalizer
    generation: 2
    labels:
      kubernetes.io/nodename: pawan-2
      openebs.io/persistent-volume: pvc-18cab7c3-ec5e-4264-8507-e6f7df4c789a
    name: snapshot-1a5cf166-c599-4f58-9f3c-f1148be47fca
    namespace: openebs
    resourceVersion: "30035"
    selfLink: /apis/openebs.io/v1alpha1/namespaces/openebs/zfssnapshots/snapshot-1a5cf166-c599-4f58-9f3c-f1148be47fca
    uid: e29d571c-42b5-4fb7-9110-e1cfc9b96641
  spec:
    capacity: "4294967296"
    fsType: zfs
    ownerNodeID: pawan-2
    poolName: zfspv-pool
    status: Ready
    volumeType: DATASET
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
```

Create a clone volume

We can provide the snapshot name as a dataSource to create a clone volume:
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: zfspv-clone
spec:
  storageClassName: openebs-zfspv
  dataSource:
    name: zfspv-snap
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
```
It will create a ZFS clone volume from the mentioned snapshot and create the PV on the same node where the original volume is.

As resize is not supported yet, the clone PVC size should match the size of the snapshot.
Also, the properties from the storageclass will not be considered for the clone case; the clone volume will take its properties from the snapshot. One thing to note here is that the storageclass in the clone PVC should have the same poolname as the original volume, since cloning across pools is not supported.


Signed-off-by: Pawan <pawan@mayadata.io>
2020-02-13 13:31:17 +05:30
Pawan
523e862159 refactor(zfspv): renamed watcher to mgmt package
as it does the management task. Also corrected a few logs
and renamed zvol to zfs (as we support both zvol and dataset).

Signed-off-by: Pawan <pawan@mayadata.io>
2019-11-26 21:38:32 +05:30
Pawan Prakash Sharma
68db6d2774 feat(ZFSPV): adding support for applications to create "zfs" filesystem (#15)
Applications can now create a storageclass to create a zfs filesystem:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv5
allowVolumeExpansion: true
parameters:
  blocksize: "4k"
  fstype: "zfs"
  poolname: "zfspv-pool"
provisioner: zfs.csi.openebs.io
```

ZFSPV was supporting only the ext2/3/4 and xfs filesystems, which
adds one extra filesystem layer on top of the ZFS filesystem. Now
we can directly write to the ZFS filesystem and get optimal performance
by directly creating a ZFS filesystem for storage.

Signed-off-by: Pawan <pawan@mayadata.io>
2019-11-21 19:00:15 +05:30
Pawan Prakash Sharma
d0e97cddb2 adding topology support for zfspv (#7)
This PR adds support to allow the CSI driver to pick a node matching the topology specified in the storage class. Admins can specify allowedTopologies in the StorageClass to list the nodes where the ZFS pools are set up:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv
allowVolumeExpansion: true
parameters:
  blocksize: "4k"
  compression: "on"
  dedup: "on"
  thinprovision: "yes"
  poolname: "zfspv-pool"
provisioner: zfs-localpv
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
- matchLabelExpressions:
  - key: kubernetes.io/hostname
    values:
      - gke-zfspv-pawan-default-pool-c8929518-cgd4
      - gke-zfspv-pawan-default-pool-c8929518-dxzc
```

Note: This PR picks up the first node from the list of nodes available.

Signed-off-by: Pawan <pawan@mayadata.io>
2019-11-01 06:46:04 +05:30
Pawan Prakash Sharma
0218dacea0 feat(ZFSPV): adding encryption in ZFSVolume CR (#6)
Adding support for enabling encryption using a custom key.

Also, adding support to inherit properties from the ZPOOL
that are not listed in the storage class: the ZFS driver will
not pass default values while creating the volume, and those
properties will be inherited from the ZPOOL.

We can use the encryption options in the storage class:
```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv
allowVolumeExpansion: true
parameters:
  blocksize: "4k"
  compression: "on"
  dedup: "on"
  thinprovision: "yes"
  encryption: "on"
  keyformat: "raw"
  keylocation: "file:///home/keys/key"
  poolname: "zfspv-pool"
provisioner: openebs.io/zfs
```

Just a note: the key file should be mounted inside the node-agent container so that it can be used while provisioning the volume. keyformat can be raw, hex, or passphrase.
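
For illustration only, a hedged excerpt of how the key directory could be exposed to the node-agent container via a hostPath mount; the volume name and layout here are assumptions, not from this commit:

```yaml
# hypothetical excerpt (names are placeholders): makes the key available at
# /home/keys/key, matching the keylocation in the storage class above
volumeMounts:
- name: encr-keys
  mountPath: /home/keys
volumes:
- name: encr-keys
  hostPath:
    path: /home/keys
    type: DirectoryOrCreate
```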

Signed-off-by: Pawan <pawan@mayadata.io>
2019-10-15 22:51:48 +05:30
Pawan
9f5cf445df feat(zfs-localpv): initial commit
Provisioning and deprovisioning of
the volumes on a node where a zfs pool
has already been set up. The pool name and the volume
parameters have to be given in the storage class,
which will be used to provision the volume.
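
For illustration, a minimal sketch of such a storageclass; the pool name is a placeholder and the provisioner string follows the encryption commit above, so both are assumptions rather than part of this commit:

```yaml
# hypothetical minimal storageclass for the initial driver: poolname names a
# zpool that has already been set up on the node
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv
parameters:
  poolname: "zfspv-pool"
provisioner: openebs.io/zfs
```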

Signed-off-by: Pawan <pawan@mayadata.io>
2019-09-18 08:44:08 +05:30