Currently the controller picks one node and the node agent keeps trying to
create the volume on that node, even though there might not be enough space
available on that node to create the volume.
Instead, the controller can try all the nodes sequentially and fail
the request only if volume creation fails on every node that satisfies the
topology constraints.
Signed-off-by: Pawan <pawan@mayadata.io>
This PR adds the capability to create a clone from a PVC directly:
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-clone
spec:
  storageClassName: openebs-snap
  dataSource:
    name: pvc-snap
    kind: PersistentVolumeClaim
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
```
The ZFS-LocalPV driver will create an internal snapshot with the same name
as the new volume and will create a clone out of it. While destroying the
volume, the driver will also take care of deleting the snapshot that was
created for the clone.
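Once the clone PVC is provisioned, the internal snapshot can be listed through the driver's snapshot custom resource; a quick check (assuming the driver is installed in the openebs namespace, as in the examples later in this log):
```
# the internal snapshot carries the same name as the new clone volume
$ kubectl get snap -n openebs
```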
Signed-off-by: Pawan <pawan@mayadata.io>
This commit adds support for the Backup and Restore controllers, which will be
watching for the events. The velero plugin will create a Backup CR with the
remote location information to create a backup, and the controller will send
the data to that remote location.
In the same way, the velero plugin will create a Restore CR to restore the
volume from the remote location, and the restore controller will restore
the data.
Steps to use the velero plugin for ZFS-LocalPV:
1. Install velero.
2. Add the openebs plugin:
```
velero plugin add openebs/velero-plugin:latest
```
3. Create the VolumeSnapshotLocation.
For a full backup:
```yaml
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: openebs.io/zfspv-blockstore
  config:
    bucket: velero
    prefix: zfs
    namespace: openebs
    provider: aws
    region: minio
    s3ForcePathStyle: "true"
    s3Url: http://minio.velero.svc:9000
```
For an incremental backup:
```yaml
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: openebs.io/zfspv-blockstore
  config:
    bucket: velero
    prefix: zfs
    backup: incremental
    namespace: openebs
    provider: aws
    region: minio
    s3ForcePathStyle: "true"
    s3Url: http://minio.velero.svc:9000
```
4. Create a backup:
```
velero backup create my-backup --snapshot-volumes --include-namespaces=velero-ns --volume-snapshot-locations=default --storage-location=default
```
5. Create a schedule:
```
velero create schedule newschedule --schedule="*/1 * * * *" --snapshot-volumes --include-namespaces=velero-ns --volume-snapshot-locations=default --storage-location=default
```
6. Restore from the backup:
```
velero restore create --from-backup my-backup --restore-volumes=true --namespace-mappings velero-ns:ns1
```
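The backup and restore progress can then be checked with the standard velero commands:
```
velero backup get
velero restore get
```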
Signed-off-by: Pawan <pawan@mayadata.io>
The controller does not check whether the volume has actually been created
and returns success, which in turn binds the PVC to the PV.
The PVC should not be bound until the corresponding ZFS volume has been created.
Now the controller will check that the ZFSVolume CR state is "Ready" before
returning success. The CSI provisioner will retry the CreateVolume request when
it gets an error reply, and once the ZFS node agent creates the ZFS volume and
sets the ZFSVolume CR state to "Ready", the controller will return success for
the CreateVolume request and the PVC will be bound.
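This intermediate state can be observed on the ZFSVolume CR while provisioning is in progress; an illustrative check, assuming the driver is installed in the openebs namespace:
```
# the PVC stays Pending until the node agent marks the ZFSVolume Ready
$ kubectl get zfsvolume -n openebs
```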
Signed-off-by: Pawan <pawan@mayadata.io>
There are setups where the nodename is different from the hostname.
The driver uses the nodename and tries to set the "kubernetes.io/hostname"
node label to the nodename, which will fail if the nodename is not the same
as the hostname. Here, we change the key to a unique name so that the driver
can set that key as a node label without modifying or touching the existing node labels.
From now on, the driver will use the "openebs.io/nodename" key to set the PV node affinity.
Old volumes will have "kubernetes.io/hostname" affinity, and they will keep working:
after PR https://github.com/openebs/zfs-localpv/pull/94, the driver supports all node
labels as topology keys, and all nodes have the "kubernetes.io/hostname" label set, so
old volumes will work without any issue. For the same reason, old storage classes
that use "kubernetes.io/hostname" as the topology key will also keep working, as that key is supported.
This also fixes the issue where the driver was trying to create the PV on the master node:
since the master node has the "kubernetes.io/hostname" label, it was also becoming a valid
candidate for provisioning the PV. After changing the key to a unique name, the driver
does not run on the master node and therefore never sets the "openebs.io/nodename" label
on it, so that node will never become a valid candidate for provisioning the volume.
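For illustration, a PV provisioned after this change carries node affinity of roughly the following shape (a sketch; node-1 is a placeholder node name):
```yaml
nodeAffinity:
  required:
    nodeSelectorTerms:
    - matchExpressions:
      - key: openebs.io/nodename
        operator: In
        values:
        - node-1
```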
Signed-off-by: Pawan <pawan@mayadata.io>
We can resize the volume by updating the PVC yaml to
the desired size and applying it. The ZFS driver will take care
of updating the quota in the case of a dataset. If we are using a
zvol and have mounted it as an ext4 or xfs filesystem, the driver will take
care of expanding the volume via the resize2fs/xfs_growfs binaries.
For resize, the storageclass that provisions the PVC must support
resize: allowVolumeExpansion should be set to true in the storageclass.
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv
allowVolumeExpansion: true
parameters:
  poolname: "zfspv-pool"
provisioner: zfs.csi.openebs.io
```
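For example, to expand the volume we can patch the PVC's requested size directly (the PVC name csi-zfspv and the new size 8Gi are placeholders):
```
kubectl patch pvc csi-zfspv -p '{"spec":{"resources":{"requests":{"storage":"8Gi"}}}}'
```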
Signed-off-by: Pawan <pawan@mayadata.io>
Whenever a volume is provisioned or de-provisioned, we will send a Google Analytics event with mainly the following details:
1. pvName (will be shown as the app title in Google Analytics)
2. size of the volume
3. event type: volume-provision, volume-deprovision
4. storage type: zfs-localpv
5. replica count: 1
6. ClientId: the default namespace uuid
Apart from this, we send an event once every 24 hours, which will have some info like the number of nodes, node type, kubernetes version, etc.
This metric is controlled by the OPENEBS_IO_ENABLE_ANALYTICS env. We can set it to false if we don't want to send the metrics.
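A minimal sketch of turning it off, assuming the env is set on the driver's container spec (the surrounding deployment fields are omitted):
```yaml
env:
- name: OPENEBS_IO_ENABLE_ANALYTICS
  value: "false"
```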
Signed-off-by: Pawan <pawan@mayadata.io>
This commit supports snapshot and clone commands via the CSI driver. Users can create snapshots and clones using the following steps.
Note:
- The snapshot is created via reconciliation of a ZFSSnapshot CR
- The cloned volume will be on the same zpool where the snapshot is taken
- The cloned volume will have the same properties as the source volume
-----------------------------------
Create a VolumeSnapshotClass:
```yaml
kind: VolumeSnapshotClass
apiVersion: snapshot.storage.k8s.io/v1beta1
metadata:
  name: zfspv-snapclass
  annotations:
    snapshot.storage.kubernetes.io/is-default-class: "true"
driver: zfs.csi.openebs.io
deletionPolicy: Delete
```
Once the snapshotclass is created, we can use it to create a snapshot:
```yaml
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: zfspv-snap
spec:
  volumeSnapshotClassName: zfspv-snapclass
  source:
    persistentVolumeClaimName: csi-zfspv
```
```
$ kubectl get volumesnapshot
NAME         AGE
zfspv-snap   7m52s
```
```
$ kubectl get volumesnapshot -o yaml
apiVersion: v1
items:
- apiVersion: snapshot.storage.k8s.io/v1beta1
  kind: VolumeSnapshot
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"snapshot.storage.k8s.io/v1beta1","kind":"VolumeSnapshot","metadata":{"annotations":{},"name":"zfspv-snap","namespace":"default"},"spec":{"source":{"persistentVolumeClaimName":"csi-zfspv"},"volumeSnapshotClassName":"zfspv-snapclass"}}
    creationTimestamp: "2020-01-30T10:31:24Z"
    finalizers:
    - snapshot.storage.kubernetes.io/volumesnapshot-as-source-protection
    - snapshot.storage.kubernetes.io/volumesnapshot-bound-protection
    generation: 1
    name: zfspv-snap
    namespace: default
    resourceVersion: "30040"
    selfLink: /apis/snapshot.storage.k8s.io/v1beta1/namespaces/default/volumesnapshots/zfspv-snap
    uid: 1a5cf166-c599-4f58-9f3c-f1148be47fca
  spec:
    source:
      persistentVolumeClaimName: csi-zfspv
    volumeSnapshotClassName: zfspv-snapclass
  status:
    boundVolumeSnapshotContentName: snapcontent-1a5cf166-c599-4f58-9f3c-f1148be47fca
    creationTime: "2020-01-30T10:31:24Z"
    readyToUse: true
    restoreSize: "0"
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
```
OpenEBS resource for the created snapshot:
```
$ kubectl get snap -n openebs -o yaml
apiVersion: v1
items:
- apiVersion: openebs.io/v1alpha1
  kind: ZFSSnapshot
  metadata:
    creationTimestamp: "2020-01-30T10:31:24Z"
    finalizers:
    - zfs.openebs.io/finalizer
    generation: 2
    labels:
      kubernetes.io/nodename: pawan-2
      openebs.io/persistent-volume: pvc-18cab7c3-ec5e-4264-8507-e6f7df4c789a
    name: snapshot-1a5cf166-c599-4f58-9f3c-f1148be47fca
    namespace: openebs
    resourceVersion: "30035"
    selfLink: /apis/openebs.io/v1alpha1/namespaces/openebs/zfssnapshots/snapshot-1a5cf166-c599-4f58-9f3c-f1148be47fca
    uid: e29d571c-42b5-4fb7-9110-e1cfc9b96641
  spec:
    capacity: "4294967296"
    fsType: zfs
    ownerNodeID: pawan-2
    poolName: zfspv-pool
    status: Ready
    volumeType: DATASET
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
```
Create a clone volume
We can provide the snapshot name as a dataSource to create a clone volume:
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: zfspv-clone
spec:
  storageClassName: openebs-zfspv
  dataSource:
    name: zfspv-snap
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
```
It will create a ZFS clone volume from the mentioned snapshot and create the PV on the same node where the original volume resides.
Since resize is not supported yet, the clone PVC size should match the size of the snapshot.
Also, the properties from the storageclass will not be considered for the clone case; the driver will take the properties from the snapshot and create the clone volume. One thing to note here is that the storageclass in the clone PVC should have the same poolname as that of the original volume, as cloning across pools is not supported.
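On the owner node, the snapshot and its clone can be verified with standard ZFS tooling; an illustrative check, assuming the pool is zfspv-pool:
```
# lists datasets, snapshots and clones under the pool recursively
$ zfs list -t all -r zfspv-pool
```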
Signed-off-by: Pawan <pawan@mayadata.io>
This is an initial scheduler implementation for ZFS Local PV:
* adding the scheduler as a configurable option
* adding volumeWeightedScheduler as the scheduling logic
The volumeWeightedScheduler will go through all the nodes as per the
topology information and pick the node which has fewer
volumes provisioned in the given pool.
Let's say there are 2 nodes, node1 and node2, with the below pool configuration:
```
node1
|
|-----> pool1
| |
| |------> pvc1
| |------> pvc2
|-----> pool2
|------> pvc3
node2
|
|-----> pool1
| |
| |------> pvc4
|-----> pool2
|------> pvc5
|------> pvc6
```
So if the application is using pool1 as shown in the below storage class, then the ZFS driver will schedule it on node2, as it has one volume in pool1 compared to node1, which has 2 volumes.
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: openebs-zfspv
provisioner: zfs.csi.openebs.io
parameters:
  blocksize: "4k"
  compression: "on"
  dedup: "on"
  thinprovision: "yes"
  poolname: "pool1"
```
So if the application is using pool2 as shown in the below storage class, then the ZFS driver will schedule it on node1, as it has only one volume in pool2 compared to node2, which has 2 volumes.
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: openebs-zfspv
provisioner: zfs.csi.openebs.io
parameters:
  blocksize: "4k"
  compression: "on"
  dedup: "on"
  thinprovision: "yes"
  poolname: "pool2"
```
If all the nodes have the same number of volumes for the given pool, the scheduler can pick any node and schedule the PV on it.
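Since the scheduler is a configurable option, it would be selected through a StorageClass parameter; a hedged sketch (the scheduler parameter name and the VolumeWeighted value are assumptions for illustration, not confirmed by this commit message):
```yaml
parameters:
  poolname: "pool1"
  scheduler: "VolumeWeighted"  # assumed parameter name/value; the driver default applies if unset
```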
Signed-off-by: Pawan <pawan@mayadata.io>
This PR adds support to allow the CSI driver to pick a node matching the topology specified in the storage class. The admin can specify allowedTopologies in the StorageClass to list the nodes where the zfs pools are set up:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv
allowVolumeExpansion: true
parameters:
  blocksize: "4k"
  compression: "on"
  dedup: "on"
  thinprovision: "yes"
  poolname: "zfspv-pool"
provisioner: zfs-localpv
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
- matchLabelExpressions:
  - key: kubernetes.io/hostname
    values:
    - gke-zfspv-pawan-default-pool-c8929518-cgd4
    - gke-zfspv-pawan-default-pool-c8929518-dxzc
```
Note: This PR picks up the first node from the list of nodes available.
Signed-off-by: Pawan <pawan@mayadata.io>
This commit adds support for provisioning and deprovisioning of
volumes on a node where a ZFS pool
has already been set up. The pool name and the volume
parameters have to be given in the storage class,
which will be used to provision the volume.
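Since the driver expects the ZFS pool to already exist on the node, it has to be created beforehand; for example (the device /dev/sdb is a placeholder):
```
# create the pool that the storage class refers to via poolname
zpool create zfspv-pool /dev/sdb
```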
Signed-off-by: Pawan <pawan@mayadata.io>