Set of scripts to run Percona software in OpenShift / Kubernetes / Google Cloud Kubernetes Engine


The best way to deploy the software suite is to use the proposed Helm charts.


The pmm-server and pmm-client containers require root privileges (RunAsUser: 0), so if you plan to use PMM monitoring,
make sure Kubernetes or OpenShift allows it.
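
For reference, a container that runs as root declares this in its pod security context. The following is a sketch only (image name and pod fields are illustrative, not the charts' actual manifests):

```yaml
# Sketch: the securityContext a root-running PMM container needs.
apiVersion: v1
kind: Pod
metadata:
  name: pmm-server
spec:
  containers:
    - name: pmm-server
      image: percona/pmm-server
      securityContext:
        runAsUser: 0   # root; the cluster (or OpenShift SCC policy) must permit this
```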

To start pmm-server, execute from the helm/helm-pmm-server directory:

helm install --name monitoring . -f values.yaml

It will expose a public IP address for access:

kubectl get service
NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)        AGE
monitoring-service   LoadBalancer   80:32516/TCP   10m

Percona XtraDB Cluster

Basic deployment

helm install --name cluster1 . -f values.yaml

By default it will deploy ProxySQL in front of the nodes and a pmm-client on each node:

kubectl get pods
NAME                  READY     STATUS    RESTARTS   AGE
cluster1-node-0       2/2       Running   0          5m
cluster1-node-1       2/2       Running   0          4m
cluster1-node-2       2/2       Running   0          3m
cluster1-proxysql-0   2/2       Running   0          5m
monitoring-0          1/1       Running   0          1h

Connect to ProxySQL admin:

kubectl exec -it cluster1-proxysql-0 -c proxysql -- mysql -h127.0.0.1 -P6032 -uadmin -padmin

Connect to PXC via ProxySQL from a client application:

kubectl run -i --tty percona-client --image=percona:5.7 --restart=Never -- bash -il
root@percona-client:/# mysql -hcluster1-proxysql -uroot -psecr3t    

Master - N Slaves ReplicaSet

ReplicaSet support is currently broken, and ProxySQL is not supported in this mode.

helm install --name rs1 . -f values.yaml --set kind=replicaset

Helm with OpenShift

PMM-Server and pmm-clients need to run as user 0 (root), which is complicated in OpenShift.
So the proper way to start a Helm release there is:

helm install --name dep1 . -f values.yaml  --set pmm.enabled=false,platform=openshift

Alternatively, edit values.yaml to change pmm.enabled and platform.


To perform backups you need to:

  1. Create a persistent backup volume. Adjust the file backup-volume.yaml for your needs.
  2. Execute a backup job. An example is in the xtrabackup-job.yaml file; to perform a backup, run: kubectl apply -f xtrabackup-job.yaml
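
The overall shape of such a backup Job is sketched below. This is illustrative only — the image, claim name, and mount path are hypothetical placeholders, not the contents of the repository's actual xtrabackup-job.yaml:

```yaml
# Illustrative sketch only -- see the real xtrabackup-job.yaml in this repo.
apiVersion: batch/v1
kind: Job
metadata:
  name: xtrabackup
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: xtrabackup
          image: percona/backup-image      # hypothetical image name
          volumeMounts:
            - name: backup
              mountPath: /backup           # hypothetical: where the backup lands
      volumes:
        - name: backup
          persistentVolumeClaim:
            claimName: backup-claim        # hypothetical: claim from backup-volume.yaml
```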

Restore from backup

To start the cluster from a backup:

  1. Make sure the cluster is not running.
  2. Locate the directory you want to restore from on the backup volume, e.g. cluster1-node-0.cluster1-nodes-2018-06-18-17-26.
  3. Adjust and run the backup-restore job.

Kubernetes deployments (without Helm)

MySQL Passwords

Before deploying, you need to create passwords (secrets) that will be used to access Percona Server / Percona XtraDB Cluster.
We provide secret.yaml as an example. Please use your own secure passwords!

Use base64 to encode a password for secret.yaml: echo -n 'securepassword' | base64.

Use base64 -d to decode a password from secret.yaml: echo YmFja3VwX3Bhc3N3b3Jk | base64 -d.
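
The two commands combine into a quick round-trip check before pasting a value into secret.yaml:

```shell
# Encode a plaintext password into the base64 form secret.yaml expects.
encoded=$(echo -n 'securepassword' | base64)
echo "$encoded"    # c2VjdXJlcGFzc3dvcmQ=

# Decode it back to confirm the round trip.
echo "$encoded" | base64 -d && echo
```

The -n flag matters: without it, echo appends a trailing newline that gets encoded into the secret.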


The proposed deployments were tested on Kubernetes 1.9 / OpenShift Origin 3.9. Earlier versions may not work.

The deployments assume you have a default StorageClass that will provide Persistent Volumes. If not, you need to create PersistentVolumes manually.


Percona XtraDB Cluster N nodes

The deployment pxc.yaml will create a StatefulSet with N nodes (defined in replicas: 3).
Pay attention to the service name, defined in name: pxccluster1.


  • [ ] Encrypted connections from clients to PXC Nodes
  • [ ] Encrypted connections between PXC Nodes

ProxySQL service over Percona XtraDB Cluster

The deployment proxysql-pxc.yaml will create a ProxySQL service and automatically configure it to route traffic to the Percona XtraDB Cluster service.
The service to be handled is defined in the line: - -service=pxccluster1


  • [ ] Encrypted connections from ProxySQL to PXC Nodes

A custom MySQL config

The deployments support a custom MySQL config.
You can customize mysql-configmap.yaml to add any configuration lines you may need.
The following command creates the ConfigMap: kubectl create -f mysql-configmap.yaml. The ConfigMap must be created before any deployments.
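
For instance, a minimal mysql-configmap.yaml could carry a [mysqld] section like the sketch below. The data key and settings are illustrative assumptions, not the repository's actual file:

```shell
# Write an illustrative ConfigMap (key name and settings are hypothetical).
cat > mysql-configmap.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql
data:
  my.cnf: |
    [mysqld]
    max_connections=1024
EOF

# Then load it before any deployment:
# kubectl create -f mysql-configmap.yaml
```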

Further work

  • [ ] Provide deployments for PMM Server
  • [ ] Configure nodes with PMM Client
  • [ ] Provide guidance on how to create / restore from backups


For OpenShift, replace kubectl with oc.

  • List available nodes: kubectl get nodes
  • List running pods: kubectl get pods
  • Create a deployment: kubectl create -f replica-set.yaml
  • Delete a deployment: kubectl delete -f replica-set.yaml
  • Watch pods changing during deployment: watch kubectl get pods
  • Diagnostics about a pod, in case of failure: kubectl describe po/rsnode-0
  • Logs from a pod: kubectl logs -f rsnode-0
  • Logs from a particular container in a pod: kubectl logs -f rsnode-1 -c clone-mysql
  • Access bash in a container: kubectl exec rsnode-0 -it -- bash
  • Access mysql in a container: kubectl exec rsnode-0 -it -- mysql -uroot -proot_password
  • Access the ProxySQL admin: kubectl exec proxysql-0 -it -- mysql -uadmin -padmin -h127.0.0.1 -P6032


One-liner to prepare the sysbench-tpcc database:

kubectl run sysbench1 --image=perconalab/sysbench --restart=Never --env="LUA_PATH=/sysbench/sysbench-tpcc/?.lua" --command -- sysbench-tpcc/tpcc.lua --mysql-host=cluster1-node-0.cluster1-nodes --mysql-user=root --mysql-password=secr3t --scale=10 --mysql-db=sbtest --db-driver=mysql --force-pk=1 prepare