chore(doc): adding raw block volume details in README
also added detailed upgrade steps. Signed-off-by: Pawan <pawan@mayadata.io>
This commit is contained in:
parent 654f363b5d
commit 34cc65df00
3 changed files with 117 additions and 4 deletions

@@ -511,7 +511,11 @@ Here you can note that this resource has Snapname field which tells that this vo

check [resize doc](docs/resize.md).

#### 10. Raw Block Volume

check [raw block volume](docs/raw-block.md).

#### 11. Deprovisioning

for deprovisioning the volume we can delete the application which is using the volume and then we can go ahead and delete the pv. As part of deletion of the pv, this volume will also be deleted from the ZFS pool and the data will be freed.

docs/raw-block-volume.md (new file, 74 lines)

@@ -0,0 +1,74 @@

There are some specialized applications that require direct access to a block device because, for example, the file system layer introduces unneeded overhead. The most common case is databases, which prefer to organize their data directly on the underlying storage. Raw block devices are also commonly used by any software which itself implements some kind of storage service (software defined storage systems).

As it becomes more common to run database software and storage infrastructure software inside of Kubernetes, the need for raw block device support in Kubernetes becomes more important.

To provision a Raw Block volume, we should create a storageclass without any fstype parameter, as a Raw Block volume does not have a filesystem :-

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: zfspv-block
allowVolumeExpansion: true
parameters:
  poolname: "zfspv-pool"
provisioner: zfs.csi.openebs.io
```
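
To use it, apply the StorageClass and confirm it is registered (the manifest file name below is just an illustration):

```
# zfspv-block-sc.yaml holds the StorageClass shown above (illustrative name)
$ kubectl apply -f zfspv-block-sc.yaml
$ kubectl get sc zfspv-block
```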

Now we can create a PVC with volumeMode as Block to request a Raw Block Volume :-

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: block-claim
spec:
  volumeMode: Block
  storageClassName: zfspv-block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```
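
Once applied, the claim should get provisioned and bound (with immediate binding; if the StorageClass is configured with WaitForFirstConsumer it will bind only after the consuming pod is scheduled). A quick check, with an illustrative manifest name:

```
$ kubectl apply -f block-claim.yaml
$ kubectl get pvc block-claim
```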

Now we can deploy the application using the above PVC; the ZFS-LocalPV driver will attach a Raw block device at the requested device path. We can provide the device path using volumeDevices in the application yaml :-

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fiob
spec:
  replicas: 1
  selector:
    matchLabels:
      name: fiob
  template:
    metadata:
      labels:
        name: fiob
    spec:
      containers:
      - resources:
        name: perfrunner
        image: openebs/tests-fio
        imagePullPolicy: IfNotPresent
        command: ["/bin/bash"]
        args: ["-c", "while true ;do sleep 50; done"]
        volumeDevices:
        - devicePath: /dev/xvda
          name: storage
      volumes:
      - name: storage
        persistentVolumeClaim:
          claimName: block-claim
```

As requested by the application, a Raw block volume will be visible to it at the path /dev/xvda inside the pod.

```
volumeDevices:
- devicePath: /dev/xvda
  name: storage
```
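
To double-check from inside the pod, we can exec into the application pod created by the above deployment and verify that /dev/xvda is present as a block device; this is just a sanity-check sketch, and the pod name has to be taken from the actual deployment:

```
# find the pod created by the fiob deployment
$ kubectl get pods -l name=fiob

# <fiob-pod-name> is a placeholder for the pod listed above; the first
# character of the mode should be 'b', indicating a block device
$ kubectl exec -it <fiob-pod-name> -- ls -l /dev/xvda
```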

@@ -1,4 +1,37 @@

From zfs-driver 0.6 version, the ZFS-LocalPV related CRs are now grouped together in their own group called `zfs.openebs.io`. So if we are using a driver version less than 0.6 and want to upgrade to the 0.6 release or later, then we have to follow these steps. If we are already using ZFS-LocalPV version 0.6 or greater, then we just have to apply the yaml from the release branch to upgrade.

So if the current version is 0.6 and we want to upgrade to 0.7, then we can just do this to upgrade :-

```
$ kubectl apply -f https://raw.githubusercontent.com/openebs/zfs-localpv/v0.7.x/deploy/zfs-operator.yaml
```

And if the current version is 0.2 and we want to upgrade to 0.5, then we can just do this to upgrade :-

```
$ kubectl apply -f https://raw.githubusercontent.com/openebs/zfs-localpv/v0.5.x/deploy/zfs-operator.yaml
```

And if the current version is 0.4 and we want to upgrade to 0.7, that means we are moving from a pre-0.6 version to 0.6 or later, so we have to follow all the steps mentioned here :-

*Prerequisite*

Please do not provision/deprovision any volumes during the upgrade. If we cannot guarantee this, we can scale down the openebs-zfs-controller StatefulSet to zero replicas, which will pause all provisioning/deprovisioning requests; once the upgrade is done, the upgraded driver will continue the provisioning/deprovisioning process.

```
$ kubectl edit sts openebs-zfs-controller -n kube-system
```

And set replicas to zero :

```
spec:
  podManagementPolicy: OrderedReady
  replicas: 0           # set this to 0
  revisionHistoryLimit: 10
```
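
Equivalently, instead of editing the StatefulSet by hand, the same effect can be achieved with a single `kubectl scale` command (a convenience sketch, not part of the documented steps):

```
$ kubectl scale statefulset openebs-zfs-controller --replicas=0 -n kube-system
```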

After this, the controller pod openebs-zfs-controller-x in the kube-system namespace will be terminated and all volume provisioning requests will be halted. It will be recreated as a part of step 3, which will upgrade the image to the latest release and also set replicas back to 1 (the default); this will recreate the controller pod in kube-system, and volume provisioning will resume on the upgraded system.
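
We can confirm that provisioning is actually paused by checking that the controller pod is gone (assuming the driver runs in kube-system, as above):

```
# should return nothing once the StatefulSet is scaled down to zero
$ kubectl get pods -n kube-system | grep openebs-zfs-controller
```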

steps to upgrade:-

@@ -24,10 +57,12 @@ zfssnapshot.zfs.openebs.io/snapshot-f9db91ea-529e-4dac-b2b8-ead045c612da created

Please note that if you have modified the OPENEBS_NAMESPACE env in the driver's deployment to some other namespace, then you have to pass that namespace as an argument to the upgrade.sh script: `sh upgrade/upgrade.sh [namespace]`.
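
For example, if the driver was deployed with OPENEBS_NAMESPACE set to `openebs` (the namespace value here is purely illustrative), the invocation would be:

```
$ sh upgrade/upgrade.sh openebs
```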

3. *upgrade the driver*

We can now upgrade the driver to the desired release. For example, to upgrade to v0.6, we can apply the below yaml :-

```
$ kubectl apply -f https://raw.githubusercontent.com/openebs/zfs-localpv/v0.6.x/deploy/zfs-operator.yaml
```

For future releases, if you want to upgrade from v0.4 or v0.5 to a newer version, replace `v0.6.x` with the desired version. Check that everything is good after upgrading the zfs-driver, and then run the cleanup script to remove the old CRDs.
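
Before running the cleanup script, a quick sanity check is to make sure the driver pods are healthy and the CRs are being served from the new group; a minimal sketch, assuming the driver is deployed in kube-system with the default component names:

```
# driver pods should be Running with the upgraded image
$ kubectl get pods -n kube-system | grep openebs-zfs

# the ZFS-LocalPV resources should now be listed under the zfs.openebs.io group
$ kubectl api-resources --api-group=zfs.openebs.io
```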