
Last pushed: a month ago
Percona XtraDB Cluster docker image | https://github.com/percona-lab/percona-docker/
Percona XtraDB Cluster docker image

The docker image is available at percona/percona-xtradb-cluster.
The image supports running in a Docker network, including overlay networks,
so that you can install Percona XtraDB Cluster nodes on different boxes.
There is initial support for the etcd discovery service.

Basic usage

For an example, see the start_node.sh script.

The CLUSTER_NAME environment variable should be set, and the easiest way to do it is:
export CLUSTER_NAME=cluster1

The script will try to create an overlay network ${CLUSTER_NAME}_net.
If you want to have a bridge network or network with a specific parameter,
create it in advance.
For example:
docker network create -d bridge ${CLUSTER_NAME}_net

The Docker image accepts the following parameters:

  • One of MYSQL_ROOT_PASSWORD, MYSQL_ALLOW_EMPTY_PASSWORD, or MYSQL_RANDOM_ROOT_PASSWORD must be defined.
  • The image creates the user xtrabackup@localhost for the XtraBackup SST method. If you want to set a password for the xtrabackup user, define XTRABACKUP_PASSWORD.
  • If you want to use the discovery service (currently only etcd is supported), set its address in DISCOVERY_SERVICE. The image will automatically find a running cluster by CLUSTER_NAME and join the existing cluster (or start a new one).
  • If you want to start without the discovery service, use the CLUSTER_JOIN variable. An empty variable starts a new cluster. To join an existing cluster, set CLUSTER_JOIN to the list of IP addresses of running cluster nodes.
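As a sketch of the CLUSTER_JOIN workflow (container names, the network name, and the IP address below are hypothetical, and the commands require a running Docker daemon), a two-node cluster without the discovery service could be started like this:

```shell
# Assumption: a network cluster1_net already exists and 10.0.5.2 is the
# address the first node receives on that network.

# Bootstrap the first node: CLUSTER_JOIN is left unset, so a new cluster starts.
docker run -d --name pxc1 --net=cluster1_net \
    -e CLUSTER_NAME=cluster1 \
    -e MYSQL_ROOT_PASSWORD=secret \
    -e XTRABACKUP_PASSWORD=sstpass \
    percona/percona-xtradb-cluster

# Join a second node to the running first node via its IP address.
docker run -d --name pxc2 --net=cluster1_net \
    -e CLUSTER_NAME=cluster1 \
    -e MYSQL_ROOT_PASSWORD=secret \
    -e XTRABACKUP_PASSWORD=sstpass \
    -e CLUSTER_JOIN=10.0.5.2 \
    percona/percona-xtradb-cluster
```

With more nodes, CLUSTER_JOIN can list several addresses (comma-separated), so a joiner still finds the cluster if one peer is down.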

Discovery service

The cluster will try to register itself in the discovery service, so that new nodes or ProxySQL can easily find running nodes.

Assuming you have the variable ETCD_HOST set to the IP:PORT of the running etcd (e.g., export ETCD_HOST=10.20.2.4:2379), you can explore the current settings using
curl http://$ETCD_HOST/v2/keys/pxc-cluster/$CLUSTER_NAME/?recursive=true | jq.

Example output:

{
  "action": "get",
  "node": {
    "key": "/pxc-cluster/cluster4",
    "dir": true,
    "nodes": [
      {
        "key": "/pxc-cluster/cluster4/10.0.5.2",
        "dir": true,
        "nodes": [
          {
            "key": "/pxc-cluster/cluster4/10.0.5.2/ipaddr",
            "value": "10.0.5.2",
            "modifiedIndex": 19600,
            "createdIndex": 19600
          },
          {
            "key": "/pxc-cluster/cluster4/10.0.5.2/hostname",
            "value": "2af0a75ce0cb",
            "modifiedIndex": 19601,
            "createdIndex": 19601
          }
        ],
        "modifiedIndex": 19600,
        "createdIndex": 19600
      },
      {
        "key": "/pxc-cluster/cluster4/10.0.5.3",
        "dir": true,
        "nodes": [
          {
            "key": "/pxc-cluster/cluster4/10.0.5.3/ipaddr",
            "value": "10.0.5.3",
            "modifiedIndex": 26420,
            "createdIndex": 26420
          },
          {
            "key": "/pxc-cluster/cluster4/10.0.5.3/hostname",
            "value": "cfb29833f1d6",
            "modifiedIndex": 26421,
            "createdIndex": 26421
          }
        ],
        "modifiedIndex": 26420,
        "createdIndex": 26420
      }
    ],
    "modifiedIndex": 19600,
    "createdIndex": 19600
  }
}

Currently there is no automatic cleanup for the discovery service registry. You can remove all entries using
curl http://$ETCD_HOST/v2/keys/pxc-cluster/$CLUSTER_NAME?recursive=true -XDELETE.

Starting a discovery service

For the full documentation, please check https://coreos.com/etcd/docs/latest/docker_guide.html.

A simple script to start a one-node etcd (assuming the ETCD_HOST variable is defined) is:

ETCD_HOST=${ETCD_HOST:-10.20.2.4:2379}
ETCD_IP=${ETCD_HOST%%:*}  # strip the port; the URLs below append their own
docker run -d -v /usr/share/ca-certificates/:/etc/ssl/certs -p 4001:4001 -p 2380:2380 -p 2379:2379 \
 --name etcd quay.io/coreos/etcd \
 -name etcd0 \
 -advertise-client-urls http://${ETCD_IP}:2379,http://${ETCD_IP}:4001 \
 -listen-client-urls http://0.0.0.0:2379,http://0.0.0.0:4001 \
 -initial-advertise-peer-urls http://${ETCD_IP}:2380 \
 -listen-peer-urls http://0.0.0.0:2380 \
 -initial-cluster-token etcd-cluster-1 \
 -initial-cluster etcd0=http://${ETCD_IP}:2380 \
 -initial-cluster-state new

Running a Docker overlay network

The following link is a great introduction with easy steps on how to run a Docker overlay network: http://chunqi.li/2015/11/09/docker-multi-host-networking/
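As a minimal sketch (the network name below is hypothetical, and a configured key-value store or Docker swarm mode is assumed), an overlay network matching the ${CLUSTER_NAME}_net convention above can be created with:

```shell
# Create a multi-host overlay network for the cluster nodes.
# --attachable lets standalone containers (not only services) join it
# when the engine runs in swarm mode.
docker network create -d overlay --attachable cluster1_net
```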

Running with ProxySQL

The ProxySQL image https://hub.docker.com/r/perconalab/proxysql/
provides an integration with Percona XtraDB Cluster and discovery service.

You can start the ProxySQL image with:

docker run -d -p 3306:3306 -p 6032:6032 --net=$NETWORK_NAME --name=${CLUSTER_NAME}_proxysql \
        -e CLUSTER_NAME=$CLUSTER_NAME \
        -e ETCD_HOST=$ETCD_HOST \
        -e MYSQL_ROOT_PASSWORD=Theistareyk \
        -e MYSQL_PROXY_USER=proxyuser \
        -e MYSQL_PROXY_PASSWORD=s3cret \
        perconalab/proxysql

where MYSQL_ROOT_PASSWORD is the root password for the MySQL nodes. The password is needed to register the proxy user. The user MYSQL_PROXY_USER with password MYSQL_PROXY_PASSWORD will be registered on all Percona XtraDB Cluster nodes.

Running docker exec -it ${CLUSTER_NAME}_proxysql add_cluster_nodes.sh will register all nodes in ProxySQL.
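Once the nodes are registered, clients connect to ProxySQL instead of to the cluster nodes directly. For example (using the published port and the hypothetical proxy user credentials from the docker run command above):

```shell
# Connect through ProxySQL on the published MySQL port; queries are
# routed to the registered Percona XtraDB Cluster nodes.
mysql -h 127.0.0.1 -P 3306 -u proxyuser -ps3cret
```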

Owner: percona

Comments (6)
nmtan
8 days ago

Even with the discovery service, no containers with data (storage files) in them can be restarted. If you have to turn one container off (for any reason), when you try to start it again, it will JUST KEEP RESTARTING EVERY MINUTE. All of this suggests that the only way for this to work is to have containers without storage, quite a useless thing for databases, don't you think?

And the logs show NO SIGN of anything.

I struggled with this in July for 2 months, then abandoned it altogether. And now that I had to return to this due to project needs, all I can find is no improvement whatsoever.

nmtan
8 days ago

I pretty much run into problems EVERY TIME I use this.

The containers can't be restarted, because if they are, they will fail to boot up.

If I initialize a new cluster and have some containers join it, then the FIRST ONE (the one that initialized the whole thing) can't restart and rejoin.

Pretty much, NOTHING can join the cluster if the first container is restarted for some reason.

The log command doesn't work, and log files are scattered between /var/log/mysqld.log and /var/log/mysql (which is a dedicated volume for some reason), so retrieving logs to debug is pretty much manual work.

iguanait
3 months ago

Please add support for MYSQL_ROOT_PASSWORD_FILE and XTRABACKUP_PASSWORD_FILE to use the Docker secrets feature and keep passwords safe.

zozo6015
5 months ago

I am trying to make this run using DISCOVERY_SERVICE with rkt. Everything seems to start, but node discovery is not working. The command I am using looks like this:

rkt run --hostname=$(hostname) --set-env=MYSQL_ROOT_PASSWORD=somepass --set-env=DISCOVERY_SERVICE={IP_ADDRESS}:2379 --set-env=CLUSTER_NAME=somepass --set-env=XTRBACKUP_PASSWORD=pass --insecure-options=image docker://perconalab/percona-xtradb-cluster:latest

Any suggestion?

man4j
8 months ago

Why do I need etcd if I can run the cluster in swarm mode? For example, I create the first node for bootstrapping: docker service create --network skynet -e "CLUSTER_NAME=mycluster" -e "MYSQL_ROOT_PASSWORD=PassWord123" --name mysql_init percona/percona-xtradb-cluster:5.7.16

Then I run a second node and join it to the cluster:
docker service create --network skynet -e "CLUSTER_NAME=mycluster" -e "MYSQL_ROOT_PASSWORD=PassWord123" -e "CLUSTER_JOIN=mysql_init,mysql" --name mysql percona/percona-xtradb-cluster:5.7.16

Then I no longer need the first node, so I remove it: docker service rm mysql_init

And now I can scale my galera cluster up and down: docker service scale mysql=3

lmanliang
a year ago

Because different hosts are on different network segments, we can only use --net=host.
Example:

NODENAME=node1
DockerUser=lman
MYSQLROOTPASSWORD=test
XTRABACKUPPASSWORD=test
CLUSTER_NAME=test
MASTERIP=10.0.0.23

if [ "$NODENAME" = "node1" ] ; then
docker run --name $NODENAME --net="host" -e CLUSTER_NAME=$CLUSTER_NAME -e MYSQL_ROOT_PASSWORD=$MYSQLROOTPASSWORD -d percona/percona-xtradb-cluster
else
docker run --name $NODENAME --net="host" -e CLUSTER_NAME=$CLUSTER_NAME -e MYSQL_ROOT_PASSWORD=$MYSQLROOTPASSWORD -e CLUSTER_JOIN=$MASTERIP -d percona/percona-xtradb-cluster
fi