feat(e2e-test): Add e2e-tests for zfs-localpv (#298)

Signed-off-by: w3aman <aman.gupta@mayadata.io>
Aman Gupta 2021-06-09 21:21:39 +05:30 committed by GitHub
parent 53f872fcf1
commit 4e73638b5a
137 changed files with 8745 additions and 0 deletions

@@ -0,0 +1,59 @@
## About the experiment
- Since zfs-driver v0.7.x, users can label nodes with the required topology, and the zfs-localpv driver will support any such node label as a topology key. This experiment verifies this custom-topology support for zfs-localpv: volumes should be provisioned only on nodes that carry the labels referenced as topology keys via the storage class.
- The experiment covers two scenarios: one with immediate volume binding and one with late binding (i.e. WaitForFirstConsumer). If a label is added to a node after the zfs-localpv driver has been deployed and the late binding mode is used, all the node agents must be restarted so that the driver can pick up the label and add it as a supported topology key. No restart is required with immediate volume binding, irrespective of whether the label is added before or after the zfs-driver deployment. A minimal manual sketch of the labeling step is shown below.
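For orientation, the label-and-verify flow that this test automates looks roughly like the following; `worker-1` is a hypothetical node name, and `test=custom-topology` is the default NODE_LABEL used by this experiment:
```
# Label a schedulable node; the key becomes a supported topology key for zfs-localpv
kubectl label node worker-1 test=custom-topology

# The storage classes created by this test (see storage_class.j2) then restrict
# provisioning to the labeled nodes with an allowedTopologies stanza such as:
#   allowedTopologies:
#   - matchLabelExpressions:
#     - key: test
#       values:
#       - custom-topology
```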
## Supported platforms:
- K8s: 1.18+
- OS: Ubuntu, CentOS
- ZFS: 0.7, 0.8
- ZFS-LocalPV version: 0.7+
## Entry-Criteria
- K8s cluster should be in a healthy state, with all desired nodes in Ready state.
- zfs-controller and csi node-agent daemonset pods should be in Running state (a quick manual check is sketched below).
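A quick way to check these criteria by hand, assuming the driver runs in `kube-system` with the `app=openebs-zfs-node` label that the test playbook itself queries (adjust if your install differs):
```
# All desired nodes should be Ready
kubectl get nodes

# zfs node-agent daemonset pods should be Running on every node
kubectl get pods -n kube-system -l app=openebs-zfs-node

# The zfs-controller pod should be Running as well (pod name may vary by install)
kubectl get pods -n kube-system | grep zfs-controller
```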
## Steps performed
- Randomly select any two nodes from the k8s cluster and label them with a custom key.
- Deploy five applications using PVCs provisioned by the storage class whose volume binding mode is Immediate.
- Verify that the PVCs are Bound and the application pods are in Running state.
- Verify that the volumes are provisioned only on the nodes that were labeled prior to provisioning.
- After that, deploy five more applications using PVCs provisioned by the storage class whose volume binding mode is WaitForFirstConsumer.
- Check that these PVCs remain in Pending state.
- Restart the csi node-agent pods on all nodes.
- Verify that the new topology keys are now present in the csi nodes.
- The PVCs should now come into Bound state and the application pods should reach Running state.
- Verify that the volumes are provisioned only on the labeled nodes.
- At the end of the test, remove the node labels and restart the csi node-agent pods so that the custom labels are removed from the csi nodes (see the sketch after this list).
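For reference, the restart, verification, and cleanup steps that the playbook automates boil down to roughly the following; `<node-name>` is a placeholder, and the `test` key matches the default NODE_LABEL of this experiment:
```
# Restart the zfs node-agent pods so the driver picks up labels added after deployment
kubectl delete pods -n kube-system -l app=openebs-zfs-node

# The custom key should now appear among the topology keys on the labeled csi nodes
kubectl get csinode <node-name> --no-headers -o custom-columns=:.spec.drivers[*].topologyKeys

# Cleanup at the end of the test: drop the label and restart the node agents once more
kubectl label node <node-name> test-
kubectl delete pods -n kube-system -l app=openebs-zfs-node
```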
## How to run
- This experiment accepts its parameters in the form of Kubernetes job environment variables.
- To run this zfspv custom-topology experiment, first clone the [openebs/zfs-localpv](https://github.com/openebs/zfs-localpv) repo and then apply the RBAC and CRDs for the e2e framework.
```
kubectl apply -f zfs-localpv/e2e-tests/hack/rbac.yaml
kubectl apply -f zfs-localpv/e2e-tests/hack/crds.yaml
```
Then update the required test-specific values in the run_e2e_test.yml file and create the Kubernetes job.
```
kubectl create -f run_e2e_test.yml
```
Descriptions of all the environment variables are provided as comments in the same file.
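For example, the tunables and their current values can be reviewed quickly before editing (the env names below are the ones defined in run_e2e_test.yml):
```
grep -E -A1 "name: (APP_NAMESPACE|ZPOOL_NAME|NODE_LABEL)" run_e2e_test.yml
```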
After creating the Kubernetes job, once the job's pod is instantiated, we can follow the logs of the pod that is executing the test case.
```
kubectl get pods -n e2e
kubectl logs -f <zfspv-custom-topology-xxxxx-xxxxx> -n e2e
```
To get the test-case result, fetch the corresponding e2e custom resource `e2eresult` (short name: `e2er`) and check its phase (Running or Completed) and result (Pass or Fail).
```
kubectl get e2er
kubectl get e2er zfspv-custom-topology -n e2e --no-headers -o custom-columns=:.spec.testStatus.phase
kubectl get e2er zfspv-custom-topology -n e2e --no-headers -o custom-columns=:.spec.testStatus.result
```

@@ -0,0 +1,12 @@
#!/bin/bash
# Generate five busybox application manifests (deployment + PVC) from busybox.yml,
# each with a unique name and pointing at the Immediate-binding storage class.
set -e

mkdir -p app_yamls_immediate
for i in $(seq 1 5)
do
  sed "s/pvc-custom-topology/pvc-custom-topology-$i/g" busybox.yml > app_yamls_immediate/busybox-$i.yml
  sed -i "s/busybox-deploy-custom-topology/busybox-deploy-custom-topology-$i/g" app_yamls_immediate/busybox-$i.yml
  sed -i "s/storageClassName: zfspv-custom-topology/storageClassName: zfspv-custom-topology-immediate/g" app_yamls_immediate/busybox-$i.yml
done

@@ -0,0 +1,12 @@
#!/bin/bash
# Generate five busybox application manifests (deployment + PVC) from busybox.yml,
# each with a unique name and pointing at the WaitForFirstConsumer-binding storage class.
set -e

mkdir -p app_yamls_wfc
for i in $(seq 1 5)
do
  sed "s/pvc-custom-topology/pvc-custom-topology-$i/g" busybox.yml > app_yamls_wfc/busybox-$i.yml
  sed -i "s/busybox-deploy-custom-topology/busybox-deploy-custom-topology-$i/g" app_yamls_wfc/busybox-$i.yml
  sed -i "s/storageClassName: zfspv-custom-topology/storageClassName: zfspv-custom-topology-wfc/g" app_yamls_wfc/busybox-$i.yml
done

@@ -0,0 +1,42 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox-deploy-custom-topology
  labels:
    test: zfspv-custom-topology
spec:
  selector:
    matchLabels:
      test: zfspv-custom-topology
  template:
    metadata:
      labels:
        test: zfspv-custom-topology
    spec:
      containers:
        - name: app-busybox
          imagePullPolicy: IfNotPresent
          image: gcr.io/google-containers/busybox
          command: ["/bin/sh"]
          args: ["-c", "while true; do sleep 10;done"]
          env:
          volumeMounts:
            - name: data-vol
              mountPath: /busybox
      volumes:
        - name: data-vol
          persistentVolumeClaim:
            claimName: pvc-custom-topology
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-custom-topology
spec:
  storageClassName: zfspv-custom-topology
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi

@@ -0,0 +1,33 @@
apiVersion: batch/v1
kind: Job
metadata:
  generateName: zfspv-custom-topology-
  namespace: e2e
spec:
  template:
    metadata:
      labels:
        test: zfspv-custom-topology
    spec:
      serviceAccountName: e2e
      restartPolicy: Never
      containers:
        - name: ansibletest
          image: openebs/zfs-localpv-e2e:ci
          imagePullPolicy: IfNotPresent
          env:
            - name: ANSIBLE_STDOUT_CALLBACK
              value: default

            # Namespace in which the test applications will be deployed
            - name: APP_NAMESPACE
              value: 'custom-ns'

            # Name of the zpool on the nodes; used as `poolname` in the storage classes
            - name: ZPOOL_NAME
              value: 'zfs-test-pool'

            # Custom label (key=value) applied to the selected nodes as a topology key
            - name: NODE_LABEL
              value: 'test=custom-topology'

          command: ["/bin/bash"]
          args: ["-c", "ansible-playbook ./e2e-tests/experiments/functional/zfspv-custom-topology/test.yml -i /etc/ansible/hosts -vv; exit 0"]

@@ -0,0 +1,31 @@
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: zfspv-custom-topology-wfc
allowVolumeExpansion: true
parameters:
  fstype: "zfs"
  poolname: "{{ zpool_name }}"
provisioner: zfs.csi.openebs.io
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
- matchLabelExpressions:
  - key: {{ lkey }}
    values:
    - {{ lvalue }}
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: zfspv-custom-topology-immediate
allowVolumeExpansion: true
parameters:
  fstype: "zfs"
  poolname: "{{ zpool_name }}"
provisioner: zfs.csi.openebs.io
allowedTopologies:
- matchLabelExpressions:
  - key: {{ lkey }}
    values:
    - {{ lvalue }}

@@ -0,0 +1,304 @@
- hosts: localhost
  connection: local
  gather_facts: False

  vars_files:
    - test_vars.yml

  tasks:
    - block:

          ## Generating the testname for the zfspv custom-topology support test
        - include_tasks: /e2e-tests/hack/create_testname.yml

          ## Record SOT (start of test) in the e2e result custom resource (e2e-cr)
        - include_tasks: /e2e-tests/hack/update_e2e_result_resource.yml
          vars:
            status: 'SOT'

        - name: Get any two nodes from the cluster which do not have NoSchedule taints
          shell: >
            kubectl get nodes --no-headers -o custom-columns=:.metadata.name,:.spec.taints |
            grep -v NoSchedule | shuf -n 2 | awk '{print $1}'
          args:
            executable: /bin/bash
          register: node_list

        - name: Label the nodes with the custom label
          shell: >
            kubectl label node {{ item }} {{ node_label }}
          args:
            executable: /bin/bash
          register: status
          failed_when: "status.rc != 0"
          with_items: "{{ node_list.stdout_lines }}"

        - name: Split the node label into key and value
          set_fact:
            lkey: "{{ node_label.split('=')[0] }}"
            lvalue: "{{ node_label.split('=')[1] }}"

        - name: Update the storage_class template with test-specific values
          template:
            src: storage_class.j2
            dest: storage_class.yml

        - name: Create the storage classes
          shell: kubectl create -f storage_class.yml
          args:
            executable: /bin/bash
          register: status
          failed_when: "status.rc != 0"

        - name: Create the namespace for the immediate-binding applications
          shell: kubectl create ns {{ app_ns }}-immediate
          args:
            executable: /bin/bash

        - name: Run the script for generating multiple busybox application yamls
          shell: bash app_gen_immediate.sh
          args:
            executable: /bin/bash

        - name: Apply the busybox yamls
          shell: >
            kubectl apply -f app_yamls_immediate/ -n {{ app_ns }}-immediate
          args:
            executable: /bin/bash
          register: status
          failed_when: "status.rc != 0"

        - name: Get the pvc list
          shell: kubectl get pvc -n {{ app_ns }}-immediate --no-headers -o custom-columns=:.metadata.name
          args:
            executable: /bin/bash
          register: pvc_list

        - name: Check the PVC status
          shell: kubectl get pvc {{ item }} -n {{ app_ns }}-immediate --no-headers -o custom-columns=:.status.phase
          args:
            executable: /bin/bash
          register: pvc_status
          with_items: "{{ pvc_list.stdout_lines }}"
          until: "pvc_status.stdout == 'Bound'"
          delay: 5
          retries: 30

        - name: Get the application pod list
          shell: kubectl get pods -n {{ app_ns }}-immediate -l test=zfspv-custom-topology --no-headers -o custom-columns=:.metadata.name
          args:
            executable: /bin/bash
          register: app_pod_list

        - name: Check the application pod status
          shell: >
            kubectl get pods {{ item }} -n {{ app_ns }}-immediate --no-headers -o custom-columns=:.status.phase
          args:
            executable: /bin/bash
          register: app_pod_status
          with_items: "{{ app_pod_list.stdout_lines }}"
          until: "app_pod_status.stdout == 'Running'"
          delay: 5
          retries: 20

        - name: Check the container status
          shell: >
            kubectl get pods {{ item }} -n {{ app_ns }}-immediate --no-headers -o custom-columns=:.status.containerStatuses[*].state
          args:
            executable: /bin/bash
          register: container_status
          with_items: "{{ app_pod_list.stdout_lines }}"
          until: "'running' in container_status.stdout"
          delay: 2
          retries: 30

        - name: Verify that the application pods are scheduled only on the nodes with the custom label applied
          shell: kubectl get pods {{ item }} -n {{ app_ns }}-immediate --no-headers -o custom-columns=:.spec.nodeName
          args:
            executable: /bin/bash
          register: node_name
          with_items: "{{ app_pod_list.stdout_lines }}"
          failed_when: "'{{ node_name.stdout }}' not in node_list.stdout"

        - name: Deprovision the applications
          shell: kubectl delete -f app_yamls_immediate/ -n {{ app_ns }}-immediate
          args:
            executable: /bin/bash
          register: deprovision_status
          failed_when: "deprovision_status.rc != 0"

        - name: Delete the namespace
          shell: kubectl delete ns {{ app_ns }}-immediate
          args:
            executable: /bin/bash
          register: namespace_status
          failed_when: "namespace_status.rc != 0"

        - name: Create the namespace for the WaitForFirstConsumer applications
          shell: kubectl create ns {{ app_ns }}-wfc
          args:
            executable: /bin/bash

        - name: Run the script for generating multiple busybox application yamls
          shell: bash app_gen_wfc.sh
          args:
            executable: /bin/bash

        - name: Apply the busybox yamls
          shell: >
            kubectl apply -f app_yamls_wfc/ -n {{ app_ns }}-wfc
          args:
            executable: /bin/bash
          register: status
          failed_when: "status.rc != 0"

          ## A restart of the node-agent pods is required for the driver to become aware of the node labels.
          ## Until then, the PVCs will remain in Pending state.
        - name: Check that all the PVCs are in Pending state
          shell: kubectl get pvc -n {{ app_ns }}-wfc --no-headers -o custom-columns=:.status.phase | sort | uniq
          args:
            executable: /bin/bash
          register: pvc_status
          failed_when: "pvc_status.stdout != 'Pending'"

        - name: Restart the zfs node-agent pods in the kube-system namespace
          shell: kubectl delete pods -n kube-system -l app=openebs-zfs-node
          args:
            executable: /bin/bash

        - name: Wait for 10 seconds
          shell: sleep 10

        - name: Check that the zfs node-agent pods come into Running state
          shell: >
            kubectl get pods -n kube-system -l app=openebs-zfs-node
            --no-headers -o custom-columns=:.status.phase | sort | uniq
          args:
            executable: /bin/bash
          register: zfs_node_pod_status
          until: "zfs_node_pod_status.stdout == 'Running'"
          delay: 5
          retries: 20

        - name: Verify that the new topology key is now available in the csi nodes
          shell: kubectl get csinode {{ item }} --no-headers -o custom-columns=:.spec.drivers[*].topologyKeys
          args:
            executable: /bin/bash
          register: csi_node_keys
          until: "'{{ lkey }}' in csi_node_keys.stdout"
          delay: 2
          retries: 20
          with_items: "{{ node_list.stdout_lines }}"

        - name: Get the pvc list
          shell: kubectl get pvc -n {{ app_ns }}-wfc --no-headers -o custom-columns=:.metadata.name
          args:
            executable: /bin/bash
          register: pvc_list

        - name: Check the status of the pvc
          shell: kubectl get pvc {{ item }} -n {{ app_ns }}-wfc --no-headers -o custom-columns=:.status.phase
          args:
            executable: /bin/bash
          register: pvc_status
          with_items: "{{ pvc_list.stdout_lines }}"
          until: "pvc_status.stdout == 'Bound'"
          delay: 2
          retries: 30

        - name: Get the application pod list
          shell: kubectl get pods -n {{ app_ns }}-wfc -l test=zfspv-custom-topology --no-headers -o custom-columns=:.metadata.name
          args:
            executable: /bin/bash
          register: app_pod_list

        - name: Check the application pod status
          shell: >
            kubectl get pods {{ item }} -n {{ app_ns }}-wfc --no-headers -o custom-columns=:.status.phase
          args:
            executable: /bin/bash
          register: app_pod_status
          with_items: "{{ app_pod_list.stdout_lines }}"
          until: "app_pod_status.stdout == 'Running'"
          delay: 5
          retries: 20

        - name: Check the container status
          shell: >
            kubectl get pods {{ item }} -n {{ app_ns }}-wfc --no-headers -o custom-columns=:.status.containerStatuses[*].state
          args:
            executable: /bin/bash
          register: container_status
          with_items: "{{ app_pod_list.stdout_lines }}"
          until: "'running' in container_status.stdout"
          delay: 2
          retries: 30

        - name: Verify that the application pods are scheduled only on the nodes with the custom label applied
          shell: kubectl get pods {{ item }} -n {{ app_ns }}-wfc --no-headers -o custom-columns=:.spec.nodeName
          args:
            executable: /bin/bash
          register: node_name
          with_items: "{{ app_pod_list.stdout_lines }}"
          failed_when: "'{{ node_name.stdout }}' not in node_list.stdout"

        - name: Deprovision the applications
          shell: kubectl delete -f app_yamls_wfc/ -n {{ app_ns }}-wfc
          args:
            executable: /bin/bash
          register: deprovision_status
          failed_when: "deprovision_status.rc != 0"

        - name: Delete the namespace
          shell: kubectl delete ns {{ app_ns }}-wfc
          args:
            executable: /bin/bash
          register: namespace_status
          failed_when: "namespace_status.rc != 0"

        - set_fact:
            flag: "Pass"

      rescue:
        - set_fact:
            flag: "Fail"

      always:

        - name: Remove the labels from the nodes at the end of the test
          shell: kubectl label node {{ item }} {{ lkey }}-
          args:
            executable: /bin/bash
          register: label_status
          with_items: "{{ node_list.stdout_lines }}"
          failed_when: "label_status.rc != 0"

        - name: Restart the zfs node-agent pods in the kube-system namespace to remove the label from the csi nodes
          shell: kubectl delete pods -n kube-system -l app=openebs-zfs-node
          args:
            executable: /bin/bash

        - name: Check that the zfs node-agent pods come into Running state
          shell: >
            kubectl get pods -n kube-system -l app=openebs-zfs-node
            --no-headers -o custom-columns=:.status.phase | sort | uniq
          args:
            executable: /bin/bash
          register: zfs_node_pod_status
          until: "zfs_node_pod_status.stdout == 'Running'"
          delay: 5
          retries: 20

        - name: Delete the storage classes
          shell: kubectl delete -f storage_class.yml
          args:
            executable: /bin/bash
          register: status
          failed_when: "status.rc != 0"

          ## Record EOT (end of test) in the e2e result custom resource
        - include_tasks: /e2e-tests/hack/update_e2e_result_resource.yml
          vars:
            status: 'EOT'

@@ -0,0 +1,7 @@
test_name: zfspv-custom-topology
app_ns: "{{ lookup('env','APP_NAMESPACE') }}"
zpool_name: "{{ lookup('env','ZPOOL_NAME') }}"
node_label: "{{ lookup('env','NODE_LABEL') }}"