# zfs-localpv/deploy/sample/percona.yaml

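# StorageClass backed by the ZFS-LocalPV CSI driver (zfs.csi.openebs.io): volumes are
# provisioned on the "zfspv-pool" ZFS pool with a 4k volblocksize and compression,
# dedup, and thin provisioning enabled. Apply the whole file with
# `kubectl apply -f percona.yaml`.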
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: zfspv-percona
allowVolumeExpansion: true
parameters:
  volblocksize: "4k"
  compression: "on"
  dedup: "on"
  thinprovision: "yes"
  poolname: "zfspv-pool"
provisioner: zfs.csi.openebs.io
---
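# PersistentVolumeClaim requesting a 4Gi ReadWriteOnce volume from the
# zfspv-percona StorageClass above.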
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: zfspv-percona
spec:
  storageClassName: zfspv-percona
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
---
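# ConfigMap carrying sql-test.sh, used by the Deployment below as a liveness probe:
# it waits for mysqld to accept connections, then creates a throwaway database,
# inserts a row, and drops it again.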
apiVersion: v1
kind: ConfigMap
metadata:
  annotations:
  name: sqltest
  namespace: default
data:
  sql-test.sh: |
    #!/bin/bash
    DB_PREFIX="Inventory"
    DB_SUFFIX=`echo $(mktemp) | cut -d '.' -f 2`
    DB_NAME="${DB_PREFIX}_${DB_SUFFIX}"
    # Wait until mysqld accepts connections, retrying up to $retries times.
    echo -e "\nWaiting for mysql server to start accepting connections.."
    retries=10;wait_retry=30
    for i in `seq 1 $retries`; do
      mysql -uroot -pk8sDem0 -e 'status' > /dev/null 2>&1
      rc=$?
      [ $rc -eq 0 ] && break
      sleep $wait_retry
    done
    if [ $rc -ne 0 ];
    then
      echo -e "\nFailed to connect to db server after trying for $(($retries * $wait_retry))s, exiting\n"
      exit 1
    fi
    # Create a throwaway database, add one row, then drop it again.
    mysql -uroot -pk8sDem0 -e "CREATE DATABASE $DB_NAME;"
    mysql -uroot -pk8sDem0 -e "CREATE TABLE Hardware (id INTEGER, name VARCHAR(20), owner VARCHAR(20), description VARCHAR(20));" $DB_NAME
    mysql -uroot -pk8sDem0 -e "INSERT INTO Hardware (id, name, owner, description) values (1, 'dellserver', 'basavaraj', 'controller');" $DB_NAME
    mysql -uroot -pk8sDem0 -e "DROP DATABASE $DB_NAME;"
---
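# Single-replica Percona (MySQL) Deployment: /var/lib/mysql is backed by the
# zfspv-percona PVC, and sql-test.sh from the ConfigMap is mounted at / and
# executed as the liveness probe.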
apiVersion: apps/v1
kind: Deployment
metadata:
  name: percona
  labels:
    name: percona
spec:
  replicas: 1
  selector:
    matchLabels:
      name: percona
  template:
    metadata:
      labels:
        name: percona
    spec:
      containers:
        - resources:
          name: percona
          image: openebs/tests-custom-percona:latest
          imagePullPolicy: IfNotPresent
          args:
            - "--ignore-db-dir"
            - "lost+found"
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: k8sDem0
          ports:
            - containerPort: 3306
              name: percona
          volumeMounts:
            - mountPath: /var/lib/mysql
              name: demo-vol1
            - mountPath: /sql-test.sh
              subPath: sql-test.sh
              name: sqltest-configmap
          livenessProbe:
            exec:
              command: ["bash", "sql-test.sh"]
            initialDelaySeconds: 30
            periodSeconds: 1
            timeoutSeconds: 10
      volumes:
        - name: demo-vol1
          persistentVolumeClaim:
            claimName: zfspv-percona
        - name: sqltest-configmap
          configMap:
            name: sqltest
---
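# Service exposing the Percona pod's MySQL port 3306 within the cluster.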
apiVersion: v1
kind: Service
metadata:
  name: percona-mysql
  labels:
    name: percona-mysql
spec:
  ports:
    - port: 3306
      targetPort: 3306
  selector:
    name: percona