* support for multi-arch container images via github actions (see the sketch below)
* suffix the amd64 arch tag to the zfs driver image
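A minimal sketch of what such a multi-arch build job can look like, assuming docker buildx with QEMU emulation; the workflow name, image tag, and platform list are illustrative, not the exact workflow added here:
```yaml
# Illustrative workflow, not the exact one added by this commit.
name: multiarch-build
on: push
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # QEMU + buildx enable cross-building non-amd64 images
      - uses: docker/setup-qemu-action@v1
      - uses: docker/setup-buildx-action@v1
      - uses: docker/build-push-action@v2
        with:
          platforms: linux/amd64,linux/arm64   # assumed target arch list
          push: true
          tags: openebs/zfs-driver:ci          # illustrative tag
```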
Signed-off-by: prateekpandey14 <prateek.pandey@mayadata.io>
This commit adds support for the Backup and Restore controllers, which watch for
Backup and Restore events. The velero plugin will create a Backup CR carrying
the remote location information, and the backup controller will send the data
to that remote location.
In the same way, the velero plugin will create a Restore CR to restore the
volume from the remote location, and the restore controller will restore
the data.
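As a rough sketch, the Backup CR created by the plugin looks something like the following; the field names here are assumptions for illustration, the authoritative schema is the ZFSBackup CRD shipped with the driver:
```yaml
# Illustrative only: field names are assumptions; see the
# ZFSBackup CRD in this repo for the real schema.
apiVersion: zfs.openebs.io/v1
kind: ZFSBackup
metadata:
  name: backup-pvc-xxxx
  namespace: openebs
spec:
  volumeName: pvc-xxxx          # volume to back up
  backupDest: 10.0.0.5:9010     # remote location the controller streams data to
```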
Steps to use the velero plugin for ZFS-LocalPV:
1. Install velero
2. Add the openebs plugin
velero plugin add openebs/velero-plugin:latest
3. Create the volumesnapshotlocation:
For a full backup:
```yaml
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: openebs.io/zfspv-blockstore
  config:
    bucket: velero
    prefix: zfs
    namespace: openebs
    provider: aws
    region: minio
    s3ForcePathStyle: "true"
    s3Url: http://minio.velero.svc:9000
```
For an incremental backup, add `backup: incremental` to the config:
```yaml
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: openebs.io/zfspv-blockstore
  config:
    bucket: velero
    prefix: zfs
    backup: incremental
    namespace: openebs
    provider: aws
    region: minio
    s3ForcePathStyle: "true"
    s3Url: http://minio.velero.svc:9000
```
4. Create a backup (the volumesnapshotlocation created above is named `default`)
velero backup create my-backup --snapshot-volumes --include-namespaces=velero-ns --volume-snapshot-locations=default --storage-location=default
5. Create a schedule
velero create schedule newschedule --schedule="*/1 * * * *" --snapshot-volumes --include-namespaces=velero-ns --volume-snapshot-locations=default --storage-location=default
6. Restore from backup
velero restore create --from-backup my-backup --restore-volumes=true --namespace-mappings velero-ns:ns1
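To check the status of the backup and restore, the stock velero commands can be used:
velero backup get
velero restore get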
Signed-off-by: Pawan <pawan@mayadata.io>
We changed the ubuntu docker image to 20.04 in https://github.com/openebs/zfs-localpv/pull/170,
which has issues with formatting the zvol with the xfs file system. A zvol
formatted with xfs fails to mount with the error: "missing codepage or helper program, or other error".
Reverting back to ubuntu 19.10 to fix this issue.
Signed-off-by: Pawan <pawan@mayadata.io>
Now, applications can use the btrfs file system by specifying "btrfs"
as the fstype in the storageclass:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv
parameters:
  fstype: "btrfs"
  poolname: "zfspv-pool"
provisioner: zfs.csi.openebs.io
```
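For completeness, a minimal PVC sketch that consumes this storage class; the claim name and size are illustrative:
```yaml
# Illustrative PVC; the name and size are placeholders.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: btrfs-pvc
spec:
  storageClassName: openebs-zfspv
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
```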
Signed-off-by: Pawan <pawan@mayadata.io>
Whenever a volume is provisioned or de-provisioned, we will send a Google Analytics event with mainly the following details:
1. pvName (will be shown as the app title in Google Analytics)
2. size of the volume
3. event type: volume-provision or volume-deprovision
4. storage type: zfs-localpv
5. replica count: 1
6. ClientId: the default namespace uuid
Apart from this, we send an event once every 24 hours with some cluster info such as the number of nodes, node type, kubernetes version etc.
This metric is controlled by the OPENEBS_IO_ENABLE_ANALYTICS env variable, which can be set to "false" if we don't want to send the metrics.
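A minimal sketch of disabling it, assuming the env is set on the driver's container (the container name here is illustrative):
```yaml
# Illustrative snippet from the driver deployment spec;
# only the env entry matters here.
containers:
  - name: openebs-zfs-plugin    # illustrative container name
    env:
      - name: OPENEBS_IO_ENABLE_ANALYTICS
        value: "false"          # disables the google analytics events
```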
Signed-off-by: Pawan <pawan@mayadata.io>
ZFS 0.8 has a dependency on libcrypto.so.1.1, which in turn
requires GLIBC_2.25 support from the system. Changed the docker
image to ubuntu 18.04, as 16.04 ships glibc version 2.23.
Also updated the README with the supported system details.
Signed-off-by: Pawan <pawan@mayadata.io>
This commit adds provisioning and deprovisioning of
volumes on nodes where a zfs pool
has already been set up. The pool name and the volume
parameters have to be given in the storage class,
which will be used to provision the volume.
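As an illustration, a storage class carrying the pool name and a couple of volume parameters; the `compression` and `recordsize` keys are assumptions about the supported parameter set, only the pool name and provisioner come from this commit's context:
```yaml
# Illustrative storage class; compression/recordsize are
# assumptions about the supported volume parameters.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv
parameters:
  poolname: "zfspv-pool"      # zfs pool already set up on the node
  compression: "on"           # assumed volume parameter
  recordsize: "4k"            # assumed volume parameter
provisioner: zfs.csi.openebs.io
```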
Signed-off-by: Pawan <pawan@mayadata.io>