feat(e2e-test): Add e2e-tests for zfs-localpv (#298)
Signed-off-by: w3aman <aman.gupta@mayadata.io>
This commit is contained in:
parent 53f872fcf1
commit 4e73638b5a
137 changed files with 8745 additions and 0 deletions
30
e2e-tests/experiments/zfs-localpv-provisioner/Dockerfile
Normal file
@@ -0,0 +1,30 @@
# Copyright 2020-2021 The OpenEBS Authors. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

###########################################################################
# This Dockerfile is used to create the image `quay.io/w3aman/zfsutils:ci`#
# which is being used in the daemonset in the file `zfs_utils_ds.yml`     #
# Here we install zfs utils in the image so that zfs command can be run   #
# from the container, mainly to create zpool on desired nodes.            #
###########################################################################

FROM ubuntu:20.04

RUN apt-get update

RUN apt-get install sudo -y

RUN apt-get install zfsutils-linux -y

CMD [ "bash" ]
64
e2e-tests/experiments/zfs-localpv-provisioner/README.md
Normal file
@@ -0,0 +1,64 @@
## About this experiment

This experiment deploys the zfs-localpv provisioner in the kube-system namespace, which includes the zfs-controller statefulset (with a default replica count of 1) and the csi node-agent daemonset. In addition, based on the values provided via the env's in the run_e2e_test.yml file, this experiment creates zpools on the nodes as well as generic use-case storage classes and a snapshot class for dynamic provisioning of volumes.

## Supported platforms:

K8s: 1.18+

OS: Ubuntu, CentOS

ZFS: 0.7, 0.8

## Entry-Criteria

- K8s cluster should be in a healthy state, with all the desired worker nodes in Ready state.
- An external disk should be attached to the nodes so that a zpool can be created on top of it.
- If we don't use this experiment to deploy the zfs-localpv provisioner, we can directly apply the zfs-operator file via the command mentioned below; in that case make sure a zpool is already created on the desired nodes to provision volumes (one way to verify this is sketched below).

```kubectl apply -f https://openebs.github.io/charts/zfs-operator.yaml```
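
For example, a quick check of these prerequisites on a worker node could look like the following minimal sketch; `/dev/sdb` and `zfs-test-pool` are only example values (they match the examples and defaults used in run_e2e_test.yml), so substitute your own disk and pool name.

```
# run on the worker node (or from a privileged pod with host access)
lsblk /dev/sdb            # the attached disk that will back the zpool
zpool list zfs-test-pool  # confirm the zpool already exists when it is not created by this test
```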
|
## Exit-Criteria

- zfs-localpv components should be deployed successfully, and all the pods, including the zfs-controller and csi node-agent daemonset pods, should be in Running state (a sample check is shown below).
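
A minimal way to check this exit criterion, assuming the driver pods carry the `role=openebs-zfs` label and run in the kube-system namespace (which is what this experiment's own verify task checks), is:

```
# every zfs-controller and csi node-agent pod should report Running
kubectl get pods -n kube-system -l role=openebs-zfs
```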

## Steps performed

- zpool creation on nodes:
  - If the `ZPOOL_CREATION` env value is set to `true`, a zpool is created on the nodes.
  - The nodes on which the zpool will be created are selected via the value of the `ZPOOL_NODE_NAMES` env. If it is blank, the zpool is created on all worker nodes.
  - The selected nodes are labeled (if all nodes are used, labeling is skipped as it is unnecessary) so that privileged daemonset pods can be scheduled on them and create the zpool on the respective nodes by executing the zpool create command from the daemonset pods (a sketch of this command follows after this list).
  - The daemonset is deleted and the label is removed from the nodes after zpool creation.
- Download the operator file for the zfs-localpv driver from `ZFS_BRANCH`.
- Update the zfs-operator namespace if a value other than the default `openebs` is specified in the `ZFS_OPERATOR_NAMESPACE` env.
- Update the zfs-driver image tag (if a tag other than `ci` is specified).
- Apply the operator yaml and wait for the zfs-controller and csi node-agent pods to come up in Running state.
- Create general use-case storage classes for dynamic volume provisioning.
- Create one volumesnapshot class for capturing zfs volume snapshots.
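
Under the hood, the zpool creation step simply runs `zpool create` inside each daemonset pod (the pods live in the e2e namespace). A rough sketch of the command when run by hand, with example values for the pod name, pool name, type and disks (the real values come from the job's env variables):

```
# striped pool (default): the pool type is omitted
kubectl exec -ti <zfs-utils-pod> -n e2e -- bash -c 'zpool create zfs-test-pool /dev/sdb'

# mirrored pool across two disks
kubectl exec -ti <zfs-utils-pod> -n e2e -- bash -c 'zpool create zfs-test-pool mirror /dev/sdb /dev/sdc'
```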

## How to run

- This experiment accepts its parameters in the form of Kubernetes job environment variables.
- To run this experiment for deploying the zfs-localpv provisioner, clone the [openebs/zfs-localpv](https://github.com/openebs/zfs-localpv) repo and first apply the rbac and crds for the e2e-framework.

```
kubectl apply -f zfs-localpv/e2e-tests/hack/rbac.yaml
kubectl apply -f zfs-localpv/e2e-tests/hack/crds.yaml
```

Then update the needed test-specific values in the run_e2e_test.yml file and create the Kubernetes job.

```
kubectl create -f run_e2e_test.yml
```

A description of all the env variables is provided in the comments of the same file.

After creating the Kubernetes job, once the job's pod is instantiated, we can check the logs of the pod which is executing the test-case.

```
kubectl get pods -n e2e
kubectl logs -f <zfs-localpv-provisioner-xxxxx-xxxxx> -n e2e
```

To get the test-case result, get the corresponding e2e custom resource `e2eresult` (short name: e2er) and check its phase (Running or Completed) and result (Pass or Fail).

```
kubectl get e2er
kubectl get e2er zfs-localpv-provisioner -n e2e --no-headers -o custom-columns=:.spec.testStatus.phase
kubectl get e2er zfs-localpv-provisioner -n e2e --no-headers -o custom-columns=:.spec.testStatus.result
```

138
e2e-tests/experiments/zfs-localpv-provisioner/openebs-zfspv-sc.j2
Normal file
@@ -0,0 +1,138 @@
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: "zfspv-sc-ext4"
allowVolumeExpansion: true
parameters:
  volblocksize: "{{ vol_block_size }}"
  compression: "{{ compress }}"
  dedup: "{{ de_dup }}"
  fstype: "ext4"
  poolname: "{{ zpool_name }}"
provisioner: zfs.csi.openebs.io
volumeBindingMode: WaitForFirstConsumer
## To provision volumes on only some of the nodes, mention those node names in the values under allowedTopologies
##allowedTopologies:
##- matchLabelExpressions:
##  - key: kubernetes.io/hostname
##    values:


---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: "zfspv-sc-xfs"
allowVolumeExpansion: true
parameters:
  volblocksize: "{{ vol_block_size }}"
  compression: "{{ compress }}"
  dedup: "{{ de_dup }}"
  fstype: "xfs"
  poolname: "{{ zpool_name }}"
provisioner: zfs.csi.openebs.io
volumeBindingMode: WaitForFirstConsumer
## To provision volumes on only some of the nodes, mention those node names in the values under allowedTopologies
##allowedTopologies:
##- matchLabelExpressions:
##  - key: kubernetes.io/hostname
##    values:


---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: "zfspv-sc"
allowVolumeExpansion: true
parameters:
  recordsize: "{{ record_size }}"
  compression: "{{ compress }}"
  dedup: "{{ de_dup }}"
  fstype: "zfs"
  poolname: "{{ zpool_name }}"
provisioner: zfs.csi.openebs.io
volumeBindingMode: WaitForFirstConsumer
## To provision volumes on only some of the nodes, mention those node names in the values under allowedTopologies
##allowedTopologies:
##- matchLabelExpressions:
##  - key: kubernetes.io/hostname
##    values:


---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: "zfspv-sc-btrfs"
parameters:
  volblocksize: "{{ vol_block_size }}"
  compression: "{{ compress }}"
  dedup: "{{ de_dup }}"
  fstype: "btrfs"
  poolname: "{{ zpool_name }}"
provisioner: zfs.csi.openebs.io
volumeBindingMode: WaitForFirstConsumer
## To provision volumes on only some of the nodes, mention those node names in the values under allowedTopologies
##allowedTopologies:
##- matchLabelExpressions:
##  - key: kubernetes.io/hostname
##    values:

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: "zfspv-raw-block"
allowVolumeExpansion: true
parameters:
  poolname: "{{ zpool_name }}"
provisioner: zfs.csi.openebs.io

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: "zfspv-shared"
allowVolumeExpansion: true
parameters:
  shared: "yes"
  fstype: "zfs"
  poolname: "{{ zpool_name }}"
provisioner: zfs.csi.openebs.io

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: "zfspv-xfs-shared"
allowVolumeExpansion: true
parameters:
  shared: "yes"
  fstype: "xfs"
  poolname: "{{ zpool_name }}"
provisioner: zfs.csi.openebs.io

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: "zfspv-ext4-shared"
allowVolumeExpansion: true
parameters:
  shared: "yes"
  fstype: "ext4"
  poolname: "{{ zpool_name }}"
provisioner: zfs.csi.openebs.io

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: "zfspv-btrfs-shared"
parameters:
  shared: "yes"
  fstype: "btrfs"
  poolname: "{{ zpool_name }}"
provisioner: zfs.csi.openebs.io
129
e2e-tests/experiments/zfs-localpv-provisioner/run_e2e_test.yml
Normal file
@@ -0,0 +1,129 @@
---
apiVersion: batch/v1
kind: Job
metadata:
  generateName: zfs-localpv-provisioner-
  namespace: e2e
spec:
  template:
    metadata:
      labels:
        test: zfs-localpv-provisioner
    spec:
      serviceAccountName: e2e
      restartPolicy: Never
      containers:
        - name: ansibletest
          image: openebs/zfs-localpv-e2e:ci
          imagePullPolicy: IfNotPresent
          env:
            - name: ANSIBLE_STDOUT_CALLBACK
              #value: log_plays
              value: default

            # This test will download the zfs-localpv operator file from this branch.
            # Change the env value according to the versioned branch name of the zfs-localpv provisioner
            # in the openebs/zfs-localpv repo, e.g. (v1.4.x, v1.5.x or master).
            # By default the test-specific value of `ZFS_BRANCH` is master.
            - name: ZFS_BRANCH
              value: 'master'

            # After v1.5.0, in each branch of the openebs/zfs-localpv repo the zfs-localpv driver image is set
            # to the `ci` tag, i.e. `openebs/zfs-driver:ci`. Give the full image name here with the desired
            # image tag to replace the `ci` tag, e.g. (openebs/zfs-driver:1.5.0). Leaving this env empty will
            # apply the operator yaml with the default `ci` tag, i.e. `openebs/zfs-driver:ci`.
            - name: ZFS_DRIVER_IMAGE
              value: ''

            # This is the namespace where the zfs driver will create all its resources.
            # By default it is the openebs namespace. If we want to use a different
            # namespace, change the value of this env to the desired namespace name.
            - name: ZFS_OPERATOR_NAMESPACE
              value: 'openebs'

            # In addition to provisioning the zfs-localpv driver, if we want to create a zpool on the worker
            # nodes, use `true` as the value for this env, else leave it blank or set it to `false`. If a zpool
            # is already present and no zpool creation via this test script is needed, set this value to `false`.
            # By default this env value is `false`, which skips zpool creation on the nodes.
            - name: ZPOOL_CREATION
              value: ''

            # In case the value of the `ZPOOL_CREATION` env is `true`, provide here the name
            # with which the zpool will be created via this test script, else leave it blank.
            # If we don't want to create a zpool on the nodes via this test but still
            # want to create some generally used storage classes for provisioning zfs volumes,
            # provide here the zpool name which you have already set up and it will be
            # used in the storage class templates.
            # By default the test-specific value of the zpool name is `zfs-test-pool`.
            - name: ZPOOL_NAME
              value: 'zfs-test-pool'

            # If we want to create an encrypted zpool provide the value `on`, else `off`.
            # By default the value is `off`.
            - name: ZPOOL_ENCRYPTION
              value: 'off'

            # For creating an encrypted zpool this test uses keyformat as passphrase.
            # To create such a passphrase, provide here a character string of minimum length 8 (e.g. test1234)
            # which will be used automatically when the zpool create command prompts for a passphrase.
            # By default this test will use `test1234` as the password for zpool encryption;
            # you can use a different one for your zpools.
            - name: ZPOOL_ENCRYPTION_PASSWORD
              value: 'test1234'

            # This env decides which type of zpool we want to create, or which type of zpool
            # we have already set up. By default the test-specific value for this env is `stripe`.
            # Supported values are (stripe, mirror, raidz, raidz2 and raidz3).
            - name: ZPOOL_TYPE
              value: 'stripe'

            # In case the value of the `ZPOOL_CREATION` env is `true`, provide here
            # the names of the disks to use for creation of the zpool, else leave it blank, e.g. `/dev/sdb`.
            # If we want to use more than one disk (for mirrored or raidz pools), give the names in
            # space-separated format, e.g. "/dev/sdb /dev/sdc".
            - name: ZPOOL_DISKS
              value: ''

            # In case the value of the `ZPOOL_CREATION` env is `true`, provide here
            # the names of the nodes on which we want zpools to be created. Leaving this blank
            # will create zpools on all the schedulable nodes.
            # Provide node names in comma-separated format, e.g. ('node-1,node-2,node-3').
            - name: ZPOOL_NODE_NAMES
              value: ''

            # If we want to create some generally used storage classes and a snapshot class for provisioning
            # zfs volumes and taking zfs snapshots, provide `true` as the value for this env. Leaving this
            # value blank will be considered as false. By default the test-specific value for this env is `true`.
            - name: STORAGE_CLASS_CREATION
              value: 'true'

            # The snapshot class will be created with the name provided here.
            # By default the test-specific value is `zfs-snapshot-class`.
            - name: SNAPSHOT_CLASS
              value: 'zfs-snapshot-class'

            # If data compression is needed use value: 'on', else 'off'.
            # By default the test-specific value is `off`.
            - name: COMPRESSION
              value: 'off'

            # If data deduplication is needed use value: 'on', else 'off'.
            # By default the test-specific value is `off`.
            - name: DEDUP
              value: 'off'

            # This env value will be used in the storage class templates in case of the xfs, ext4 or btrfs
            # file systems, where we create a ZVOL, a raw block device carved out of the ZFS pool.
            # Provide the blocksize with which you want to create the block devices. By default the
            # test-specific value is `4k`. Supported values: any power of 2 from 512 bytes to 128 Kbytes.
            - name: VOLBLOCKSIZE
              value: '4k'

            # This env value will be used in the storage class templates in case of the zfs file system.
            # Provide the recordsize, which is the maximum block size for files, used when creating ZFS datasets.
            # By default the test-specific value is `4k`. Supported values: any power of 2 from 512 bytes to 128 Kbytes.
            - name: RECORDSIZE
              value: '4k'

          command: ["/bin/bash"]
          args: ["-c", "ansible-playbook ./e2e-tests/experiments/zfs-localpv-provisioner/test.yml -i /etc/ansible/hosts -v; exit 0"]
8
e2e-tests/experiments/zfs-localpv-provisioner/snapshot-class.j2
Normal file
@@ -0,0 +1,8 @@
kind: VolumeSnapshotClass
apiVersion: snapshot.storage.k8s.io/v1beta1
metadata:
  name: {{ snapshot_class }}
  annotations:
    snapshot.storage.kubernetes.io/is-default-class: "true"
driver: zfs.csi.openebs.io
deletionPolicy: Delete
110
e2e-tests/experiments/zfs-localpv-provisioner/test.yml
Normal file
@@ -0,0 +1,110 @@
---
- hosts: localhost
  connection: local
  gather_facts: False

  vars_files:
    - test_vars.yml

  tasks:
    - block:

        ## Generating the testname for the zfs-localpv provisioner test
        - include_tasks: /e2e-tests/hack/create_testname.yml

        ## Record SOT (start of test) in the e2e result custom resource (e2e-cr)
        - include_tasks: /e2e-tests/hack/update_e2e_result_resource.yml
          vars:
            status: 'SOT'

        - name: Create zpool on each desired worker node
          include_tasks: /e2e-tests/experiments/zfs-localpv-provisioner/zpool_creation.yml
          when: lookup('env','ZPOOL_CREATION') == 'true'

        - name: Download OpenEBS zfs-localpv operator file
          get_url:
            url: https://raw.githubusercontent.com/openebs/zfs-localpv/{{ zfs_branch }}/deploy/zfs-operator.yaml
            dest: ./zfs_operator.yml
            force: yes
          register: result
          until: "'OK' in result.msg"
          delay: 5
          retries: 3

        - name: Update the openebs zfs-driver image tag
          replace:
            path: ./zfs_operator.yml
            regexp: openebs/zfs-driver:ci
            replace: "{{ lookup('env','ZFS_DRIVER_IMAGE') }}"
          when: lookup('env','ZFS_DRIVER_IMAGE') | length > 0

        - name: Update the namespace where we want to create zfs-localpv driver resources
          shell: >
            sed -i -e "/name: OPENEBS_NAMESPACE/{n;s/value: openebs/value: {{ zfs_operator_ns }}/g}" zfs_operator.yml &&
            sed -z "s/kind: Namespace\nmetadata:\n  name: openebs/kind: Namespace\nmetadata:\n  name: {{ zfs_operator_ns }}/" -i zfs_operator.yml
          args:
            executable: /bin/bash
          register: update_status
          failed_when: "update_status.rc != 0"
          when: "zfs_operator_ns != 'openebs'"

        - name: Apply the zfs_operator file to deploy zfs-driver components
          shell:
            kubectl apply -f ./zfs_operator.yml
          args:
            executable: /bin/bash
          register: status
          failed_when: "status.rc != 0"

        - name: Verify that the zfs-controller pod and zfs-node daemonset pods are running
          shell: >
            kubectl get pods -n kube-system -l role=openebs-zfs
            --no-headers -o custom-columns=:status.phase | sort | uniq
          args:
            executable: /bin/bash
          register: zfs_driver_components
          until: "zfs_driver_components.stdout == 'Running'"
          delay: 5
          retries: 30

        - block:

            - name: Update the storage class template with test specific values.
              template:
                src: openebs-zfspv-sc.j2
                dest: openebs-zfspv-sc.yml

            - name: Create Storageclasses
              shell: kubectl apply -f openebs-zfspv-sc.yml
              args:
                executable: /bin/bash
              register: sc_result
              failed_when: "sc_result.rc != 0"

            - name: Update the volume snapshot class template with the test specific variables.
              template:
                src: snapshot-class.j2
                dest: snapshot-class.yml

            - name: Create VolumeSnapshotClass
              shell: kubectl apply -f snapshot-class.yml
              args:
                executable: /bin/bash
              register: volsc_result
              failed_when: "volsc_result.rc != 0"

          when: lookup('env','STORAGE_CLASS_CREATION') == 'true'

        - set_fact:
            flag: "Pass"

      rescue:
        - name: Setting fail flag
          set_fact:
            flag: "Fail"

      always:
        ## Record EOT (end of test) in the e2e result custom resource (e2e-cr)
        - include_tasks: /e2e-tests/hack/update_e2e_result_resource.yml
          vars:
            status: 'EOT'
29
e2e-tests/experiments/zfs-localpv-provisioner/test_vars.yml
Normal file
@@ -0,0 +1,29 @@
test_name: zfs-localpv-provisioner

zfs_branch: "{{ lookup('env','ZFS_BRANCH') }}"

zfs_driver_image: "{{ lookup('env','ZFS_DRIVER_IMAGE') }}"

zfs_operator_ns: "{{ lookup('env','ZFS_OPERATOR_NAMESPACE') }}"

zpool_name: "{{ lookup('env','ZPOOL_NAME') }}"

zpool_encryption: "{{ lookup('env','ZPOOL_ENCRYPTION') }}"

enc_pwd: "{{ lookup('env','ZPOOL_ENCRYPTION_PASSWORD') }}"

zpool_type: "{{ lookup('env','ZPOOL_TYPE') }}"

zpool_disks: "{{ lookup('env','ZPOOL_DISKS') }}"

node_names: "{{ lookup('env','ZPOOL_NODE_NAMES') }}"

snapshot_class: "{{ lookup('env','SNAPSHOT_CLASS') }}"

compress: "{{ lookup('env','COMPRESSION') }}"

de_dup: "{{ lookup('env','DEDUP') }}"

record_size: "{{ lookup('env','RECORDSIZE') }}"

vol_block_size: "{{ lookup('env','VOLBLOCKSIZE') }}"
68
e2e-tests/experiments/zfs-localpv-provisioner/zfs_utils_ds.yml
Normal file
@@ -0,0 +1,68 @@
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: e2e-zfspv-bin
  namespace: e2e
data:
  zfs: |
    #!/bin/sh
    if [ -x /host/sbin/zfs ]; then
      chroot /host /sbin/zfs "$@"
    elif [ -x /host/usr/sbin/zfs ]; then
      chroot /host /usr/sbin/zfs "$@"
    else
      chroot /host zfs "$@"
    fi

---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: zpool-creation
spec:
  selector:
    matchLabels:
      app: zfs-utils
  template:
    metadata:
      labels:
        app: zfs-utils
    spec:
      #nodeSelector:
        #test: zfs-utils
      containers:
      - name: zfsutils
        image: quay.io/w3aman/zfsutils:ci
        imagePullPolicy: IfNotPresent
        command: ['sh', '-c', 'echo Hello! && sleep 1800']
        volumeMounts:
        - name: udev
          mountPath: /run/udev
        - name: device
          mountPath: /dev
        - name: chroot-zfs
          mountPath: /sbin/zfs
          subPath: zfs
        - name: host-root
          mountPath: /host
          mountPropagation: "HostToContainer"
          readOnly: true
        securityContext:
          privileged: true
        tty: true
      volumes:
      - hostPath:
          path: /run/udev
        name: udev
      - hostPath:
          path: /dev
        name: device
      - name: chroot-zfs
        configMap:
          defaultMode: 0555
          name: e2e-zfspv-bin
      - name: host-root
        hostPath:
          path: /
          type: Directory
130
e2e-tests/experiments/zfs-localpv-provisioner/zpool_creation.yml
Normal file
@@ -0,0 +1,130 @@
---
- block:
    - name: Get the list of nodes from the value of env's for zpool creation
      set_fact:
        node_list: "{{ node_names.split(',') }}"
      when: "node_names != ''"

    - name: Get the list of all those nodes which are in Ready state and having no taints in cluster
      shell: >
        kubectl get nodes -o json | jq -r 'try .items[] | select(.spec.taints|not)
        | select(.status.conditions[].reason=="KubeletReady" and .status.conditions[].status=="True")
        | .metadata.name'
      register: schedulabel_nodes
      when: "node_names == ''"

    # The zpool creation command is `zpool create <zpool_name> <zpool_type> <disks>`.
    # If it is a striped pool then <zpool_type> is replaced by an empty string, because
    # the command for a striped pool is `zpool create <pool_name> <disks>`; for other
    # types like mirror or raidz it is replaced by the `zpool_type` env value.
    - name: Record the pool type value from env's
      set_fact:
        zpool_type_val: "{% if zpool_type == '' or zpool_type == 'stripe' %}{% else %} '{{ zpool_type }}' {% endif %}"

    - block:

        - name: Label the nodes so that the privileged DaemonSet pods can be scheduled on them
          shell: >
            kubectl label node {{ item }} test=zfs-utils
          args:
            executable: /bin/bash
          register: label_status
          failed_when: "label_status.rc != 0"
          with_items: "{{ node_list }}"

        - name: Update the DaemonSet yaml to use the nodes label selector
          shell: >
            sed -i -e "s|#nodeSelector|nodeSelector|g" \
            -e "s|#test: zfs-utils|test: zfs-utils|g" /e2e-tests/experiments/zfs-localpv-provisioner/zfs_utils_ds.yml
          args:
            executable: /bin/bash
          register: status
          failed_when: "status.rc != 0"

      when: "node_names != ''"

    - name: Create a DaemonSet with privileged access for zpool creation on nodes
      shell: >
        kubectl apply -f /e2e-tests/experiments/zfs-localpv-provisioner/zfs_utils_ds.yml
      args:
        executable: /bin/bash
      register: status
      failed_when: "status.rc != 0"

    - name: Check if DaemonSet pods are in running state on all desired nodes
      shell: >
        kubectl get pods -n e2e -l app=zfs-utils
        --no-headers -o custom-columns=:.status.phase | sort | uniq
      args:
        executable: /bin/bash
      register: result
      until: "result.stdout == 'Running'"
      delay: 3
      retries: 40

    - name: Get the list of DaemonSet pods
      shell: >
        kubectl get pods -n e2e -l app=zfs-utils --no-headers
        -o custom-columns=:.metadata.name
      args:
        executable: /bin/bash
      register: ds_pods_list

    - name: Create non-encrypted zpool on desired worker nodes
      shell: >
        kubectl exec -ti {{ item }} -- bash -c 'zpool create {{ zpool_name }} {{ zpool_type_val }} {{ zpool_disks }}'
      args:
        executable: /bin/bash
      register: zpool_status
      failed_when: "zpool_status.rc != 0"
      with_items: "{{ ds_pods_list.stdout_lines }}"
      when: zpool_encryption == 'off' or zpool_encryption == ''

    - name: Create encrypted zpool on desired worker nodes
      shell: >
        kubectl exec -ti {{ item }} -- bash -c "echo {{ enc_pwd }} | sudo -S su -c
        'zpool create -O encryption=on -O keyformat=passphrase -O keylocation=prompt {{ zpool_name }} {{ zpool_type_val }} {{ zpool_disks }}'"
      args:
        executable: /bin/bash
      register: enc_zpool_status
      failed_when: "enc_zpool_status.rc != 0"
      with_items: "{{ ds_pods_list.stdout_lines }}"
      when: "zpool_encryption == 'on'"

  always:

    # The tasks in this always block execute every time, irrespective of the result of the previous tasks,
    # so here we delete the daemonset pods and remove the label which were created on the nodes.
    # The purpose of using `ignore_errors: true` is that if this test fails even before
    # creating the daemonset or labeling the nodes, then deleting them would fail as they don't exist.

    - name: Delete the DaemonSet
      shell: >
        kubectl delete -f /e2e-tests/experiments/zfs-localpv-provisioner/zfs_utils_ds.yml
      args:
        executable: /bin/bash
      register: status
      failed_when: "status.rc != 0"
      ignore_errors: true

    - name: Remove the label from nodes
      shell: >
        kubectl label node {{ item }} test-
      args:
        executable: /bin/bash
      register: label_status
      failed_when: "label_status.rc != 0"
      with_items: "{{ node_list }}"
      when: "node_names != ''"
      ignore_errors: true

    - name: Remove the label from nodes
      shell: >
        kubectl label node {{ item }} test-
      args:
        executable: /bin/bash
      register: label_status
      failed_when: "label_status.rc != 0"
      with_items: "{{ schedulabel_nodes.stdout_lines }}"
      when: "node_names == ''"
      ignore_errors: true