zfs-localpv/pkg/mgmt/restore/doc.go

feat(zfspv): adding backup and restore support (#162)

This commit adds support for the Backup and Restore controller, which will be watching for the events. The velero plugin will create a Backup CR to create a backup with the remote location information, and the controller will send the data to that remote location. In the same way, the velero plugin will create a Restore CR to restore the volume from the remote location, and the restore controller will restore the data.

Steps to use the velero plugin for ZFS-LocalPV are:

1. Install velero.

2. Add the openebs plugin:

   velero plugin add openebs/velero-plugin:latest

3. Create the volumesnapshot location.

   For full backup:

```yaml
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: openebs.io/zfspv-blockstore
  config:
    bucket: velero
    prefix: zfs
    namespace: openebs
    provider: aws
    region: minio
    s3ForcePathStyle: "true"
    s3Url: http://minio.velero.svc:9000
```

   For incremental backup:

```yaml
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: openebs.io/zfspv-blockstore
  config:
    bucket: velero
    prefix: zfs
    backup: incremental
    namespace: openebs
    provider: aws
    region: minio
    s3ForcePathStyle: "true"
    s3Url: http://minio.velero.svc:9000
```

4. Create a backup:

   velero backup create my-backup --snapshot-volumes --include-namespaces=velero-ns --volume-snapshot-locations=aws-cloud-default --storage-location=default

5. Create a schedule:

   velero create schedule newschedule --schedule="*/1 * * * *" --snapshot-volumes --include-namespaces=velero-ns --volume-snapshot-locations=aws-local-default --storage-location=default

6. Restore from backup:

   velero restore create --from-backup my-backup --restore-volumes=true --namespace-mappings velero-ns:ns1

Signed-off-by: Pawan <pawan@mayadata.io>
2020-09-08 13:44:39 +05:30
/*
Copyright 2020 The OpenEBS Authors
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
/*
The restore flow is as follows:
- plugin creates a restore destination storage volume (zvol or dataset).
  At backup time the plugin backs up the ZFSVolume CR, so while doing the restore we have all the information related to that volume. The plugin first creates the restore destination to store the data.
- plugin then creates the ZFSRestore CR with the destination volume and the remote location (server information) from which the data will be read for the restore.
- restore controller (on the node) keeps a watch for new CRs associated with its node ID. This node ID will be the same as the node ID present in the ZFSVolume resource.
- if the Restore status is init and the CR is not marked for deletion, the restore controller will execute the `remote-read | zfs recv` command, as sketched below.
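For illustration only, here is a minimal sketch of the `remote-read | zfs recv` step: a backup stream (read from the remote location) is piped into `zfs recv` for the destination dataset. The function and names used here (restoreFromRemote, volume.backup, zfspv-pool/pvc-restore-dest) are hypothetical and do not reflect the controller's actual implementation.

	package main

	import (
		"fmt"
		"io"
		"log"
		"os"
		"os/exec"
	)

	// restoreFromRemote pipes a backup stream into `zfs recv` so that the
	// data lands in the given destination dataset.
	func restoreFromRemote(stream io.Reader, dataset string) error {
		cmd := exec.Command("zfs", "recv", dataset)
		cmd.Stdin = stream // the remote backup stream becomes stdin of zfs recv
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("zfs recv failed: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		// For illustration, read the stream from a local file; the real
		// controller reads it from the remote backup location.
		f, err := os.Open("volume.backup")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()
		if err := restoreFromRemote(f, "zfspv-pool/pvc-restore-dest"); err != nil {
			log.Fatal(err)
		}
	}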
Limitations with the initial version:
- The destination cluster should have the same node ID and zpool present.
- If the volume was thick-provisioned, the destination zpool should have enough space for that volume.
- The destination volume should be present before starting the restore operation.
- If the restore fails due to network issues and
  * the status update succeeds, the restore will not be re-attempted.
  * the status update fails, the restore will be re-attempted from the beginning (TODO: optimize this).
- If the specified backup does not exist, the plugin itself fails that restore request as there is no backup to restore from.
- If the same volume is restored twice, the data would be written again, so the plugin itself fails this kind of request.
*/
package restore