Currently the controller picks one node and the node agent keeps retrying volume
creation on that node, even though the node might not have enough space
for the volume.
The controller can instead try all the nodes sequentially and fail
the request only if volume creation fails on all the nodes that satisfy the
topology constraints.
Signed-off-by: Pawan <pawan@mayadata.io>
This commit adds support for the Backup and Restore controllers, which watch for
the corresponding events. The velero plugin will create a Backup CR, carrying the
remote location information, to create a backup; the backup controller will then
send the data to that remote location.
In the same way, the velero plugin will create a Restore CR to restore the
volume from the remote location, and the restore controller will restore
the data.
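For illustration, a hypothetical sketch of the Backup CR the plugin might create; the kind, apiVersion, field names, and values below are assumptions for illustration, not the exact schema:
```yaml
# Hypothetical sketch only: kind and field names are illustrative assumptions.
apiVersion: openebs.io/v1alpha1
kind: ZFSBackup
metadata:
  name: backup-pvc-example
  namespace: openebs
spec:
  volumeName: pvc-example          # volume to back up
  ownerNodeID: node-1              # node hosting the volume
  backupDest: 192.168.1.10:9000    # remote location to send the data to
```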
Steps to use the velero plugin for ZFS-LocalPV:
1. Install velero
2. Add the openebs plugin:
velero plugin add openebs/velero-plugin:latest
3. Create the VolumeSnapshotLocation:
For a full backup:
```yaml
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: openebs.io/zfspv-blockstore
  config:
    bucket: velero
    prefix: zfs
    namespace: openebs
    provider: aws
    region: minio
    s3ForcePathStyle: "true"
    s3Url: http://minio.velero.svc:9000
```
For an incremental backup:
```yaml
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: openebs.io/zfspv-blockstore
  config:
    bucket: velero
    prefix: zfs
    backup: incremental
    namespace: openebs
    provider: aws
    region: minio
    s3ForcePathStyle: "true"
    s3Url: http://minio.velero.svc:9000
```
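The --storage-location=default flag used in the steps below assumes a BackupStorageLocation named default also exists in the velero namespace; a minimal sketch, with illustrative bucket and prefix values:
```yaml
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: aws
  objectStorage:
    bucket: velero    # illustrative bucket name
    prefix: backup    # illustrative prefix
  config:
    region: minio
    s3ForcePathStyle: "true"
    s3Url: http://minio.velero.svc:9000
```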
4. Create a backup
velero backup create my-backup --snapshot-volumes --include-namespaces=velero-ns --volume-snapshot-locations=default --storage-location=default
5. Create a schedule
velero create schedule newschedule --schedule="*/1 * * * *" --snapshot-volumes --include-namespaces=velero-ns --volume-snapshot-locations=default --storage-location=default
6. Restore from the backup
velero restore create --from-backup my-backup --restore-volumes=true --namespace-mappings velero-ns:ns1
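Behind the scenes the restore is driven through a Restore CR, as described above; a hypothetical sketch, with kind and field names as assumptions for illustration:
```yaml
# Hypothetical sketch only: kind and field names are illustrative assumptions.
apiVersion: openebs.io/v1alpha1
kind: ZFSRestore
metadata:
  name: restore-pvc-example
  namespace: openebs
spec:
  volumeName: pvc-example          # volume to restore into
  ownerNodeID: node-1              # node where the data will be restored
  restoreSrc: 192.168.1.10:9000    # remote location to read the data from
```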
Signed-off-by: Pawan <pawan@mayadata.io>
- To generate the CRD spec, run `make manifest`; the output is generated under the
deploy/yamls directory
- added an update-crd script to automate the steps of generating the
CRDs and the validation of each type
Signed-off-by: prateekpandey14 <prateek.pandey@mayadata.io>
This commit adds support for snapshot and clone operations via the CSI driver. Users can create a snapshot and a clone using the following steps.
Note:
- The snapshot is created via reconciliation of the ZFSSnapshot CR
- The cloned volume will be on the same zpool where the snapshot was taken
- The cloned volume will have the same properties as the source volume.
-----------------------------------
Create a Snapshotclass
```yaml
kind: VolumeSnapshotClass
apiVersion: snapshot.storage.k8s.io/v1beta1
metadata:
  name: zfspv-snapclass
  annotations:
    snapshot.storage.kubernetes.io/is-default-class: "true"
driver: zfs.csi.openebs.io
deletionPolicy: Delete
```
Once the snapshotclass is created, we can use it to create a Snapshot
```yaml
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: zfspv-snap
spec:
  volumeSnapshotClassName: zfspv-snapclass
  source:
    persistentVolumeClaimName: csi-zfspv
```
```
$ kubectl get volumesnapshot
NAME         AGE
zfspv-snap   7m52s
```
```
$ kubectl get volumesnapshot -o yaml
apiVersion: v1
items:
- apiVersion: snapshot.storage.k8s.io/v1beta1
  kind: VolumeSnapshot
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"snapshot.storage.k8s.io/v1beta1","kind":"VolumeSnapshot","metadata":{"annotations":{},"name":"zfspv-snap","namespace":"default"},"spec":{"source":{"persistentVolumeClaimName":"csi-zfspv"},"volumeSnapshotClassName":"zfspv-snapclass"}}
    creationTimestamp: "2020-01-30T10:31:24Z"
    finalizers:
    - snapshot.storage.kubernetes.io/volumesnapshot-as-source-protection
    - snapshot.storage.kubernetes.io/volumesnapshot-bound-protection
    generation: 1
    name: zfspv-snap
    namespace: default
    resourceVersion: "30040"
    selfLink: /apis/snapshot.storage.k8s.io/v1beta1/namespaces/default/volumesnapshots/zfspv-snap
    uid: 1a5cf166-c599-4f58-9f3c-f1148be47fca
  spec:
    source:
      persistentVolumeClaimName: csi-zfspv
    volumeSnapshotClassName: zfspv-snapclass
  status:
    boundVolumeSnapshotContentName: snapcontent-1a5cf166-c599-4f58-9f3c-f1148be47fca
    creationTime: "2020-01-30T10:31:24Z"
    readyToUse: true
    restoreSize: "0"
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
```
The OpenEBS resource for the created snapshot:
```
$ kubectl get snap -n openebs -o yaml
apiVersion: v1
items:
- apiVersion: openebs.io/v1alpha1
  kind: ZFSSnapshot
  metadata:
    creationTimestamp: "2020-01-30T10:31:24Z"
    finalizers:
    - zfs.openebs.io/finalizer
    generation: 2
    labels:
      kubernetes.io/nodename: pawan-2
      openebs.io/persistent-volume: pvc-18cab7c3-ec5e-4264-8507-e6f7df4c789a
    name: snapshot-1a5cf166-c599-4f58-9f3c-f1148be47fca
    namespace: openebs
    resourceVersion: "30035"
    selfLink: /apis/openebs.io/v1alpha1/namespaces/openebs/zfssnapshots/snapshot-1a5cf166-c599-4f58-9f3c-f1148be47fca
    uid: e29d571c-42b5-4fb7-9110-e1cfc9b96641
  spec:
    capacity: "4294967296"
    fsType: zfs
    ownerNodeID: pawan-2
    poolName: zfspv-pool
    status: Ready
    volumeType: DATASET
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
```
Create a clone volume
We can provide the snapshot name as a dataSource to create a clone volume
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: zfspv-clone
spec:
  storageClassName: openebs-zfspv
  dataSource:
    name: zfspv-snap
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
```
It will create a ZFS clone volume from the mentioned snapshot and create the PV on the same node where the original volume is present.
Since resize is not supported yet, the clone PVC size should match the size of the snapshot.
Also, the properties from the storageclass will not be considered for the clone case; the clone volume takes its properties from the snapshot. One thing to note here is that the storageclass in the clone PVC should have the same poolname as that of the original volume, since clone is not supported across pools. A reference storageclass sketch follows below.
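A minimal storageclass sketch for the clone PVC, assuming the original volume was provisioned on the zfspv-pool pool shown above; the name and parameter values are illustrative:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv
provisioner: zfs.csi.openebs.io
parameters:
  # must match the pool of the original volume (poolName in the ZFSSnapshot above),
  # since clone across pools is not supported
  poolname: "zfspv-pool"
```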
Signed-off-by: Pawan <pawan@mayadata.io>
Renamed zvol to zfs (as we support both zvol and dataset), since it handles
the management tasks as well; also corrected a few logs.
Signed-off-by: Pawan <pawan@mayadata.io>