WildFly + Ticket Monster running in HA mode (support for mod_cluster)

Ticket-Monster Docker HA Cluster

This project contains several images that allow you to run Ticket Monster on a WildFly server.

The pieces of this demo are:

  • WildFly 10.x Application Server (standalone mode) + Ticket Monster application - Dockerfile
  • Postgres 9.x Database Server - Docker image
  • Apache HTTPD + mod_cluster (Using Server advertisement) - Docker image

Running the images

  1. Create a network


    docker network create mynet

  2. Start the PostgreSQL server container.


    docker run --name db -d -p 5432:5432 --net mynet -e POSTGRES_USER=ticketmonster -e POSTGRES_PASSWORD=ticketmonster-docker postgres

  3. Start the Apache httpd + mod_cluster container.


    docker run -d --net mynet --name modcluster -e MODCLUSTER_NET="192.168. 172. 10." -e MODCLUSTER_PORT=80 -p 80:80 karm/mod_cluster-master-dockerhub
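    The MODCLUSTER_NET value appears to be a space-separated list of IP prefixes used by the image to pick the network it advertises on. A hypothetical sketch of that kind of prefix matching (the `matches_net` function is ours, purely for illustration; the real matching happens inside the mod_cluster image):

    ```shell
    # Return success if the given IP starts with any of the listed prefixes.
    matches_net() {
      ip="$1"; shift
      for prefix in "$@"; do
        case "$ip" in
          "$prefix"*) return 0 ;;
        esac
      done
      return 1
    }

    matches_net 172.18.0.2 192.168. 172. 10. && echo "matched"  # prints matched
    ```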

  4. Check /mcm (mod_cluster manager).

     Before starting the WildFly servers, open the /mcm page that was exposed on port 80 in the previous step:


    open http://localhost/mcm #For Linux containers
    active=`docker-machine active`; open http://`docker-machine ip $active`/mcm #For docker-machine containers

    Click on the Auto Refresh link.

  5. Start the WildFly server.


    docker run -d --name server1 --net mynet rafabene/wildfly-ticketmonster-ha

  6. Check on the /mcm page that the WildFly instance was registered with mod_cluster.

  7. You can create as many WildFly instances as you want.


    docker run -d --name server2 --net mynet rafabene/wildfly-ticketmonster-ha
    docker run -d --name server3 --net mynet rafabene/wildfly-ticketmonster-ha

  8. Access the application.


    open http://localhost/ticket-monster #For Linux containers
    active=`docker-machine active`; open http://`docker-machine ip $active`/ticket-monster #For docker-machine containers

  9. You can stop some servers and check the application behaviour.


    docker stop server1
    docker stop server2

  10. Clean up all containers.


    docker rm -f $(docker ps -aq)
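The whole startup sequence above can be wrapped in a small convenience script. This is just a sketch reusing the exact commands and names from the steps (the `start_cluster` function name is ours):

```shell
#!/bin/sh
# Bring up the network, database, load balancer, and three WildFly instances.
start_cluster() {
  docker network create mynet
  docker run --name db -d -p 5432:5432 --net mynet \
    -e POSTGRES_USER=ticketmonster -e POSTGRES_PASSWORD=ticketmonster-docker postgres
  docker run -d --net mynet --name modcluster \
    -e MODCLUSTER_NET="192.168. 172. 10." -e MODCLUSTER_PORT=80 \
    -p 80:80 karm/mod_cluster-master-dockerhub
  for i in 1 2 3; do
    docker run -d --name "server$i" --net mynet rafabene/wildfly-ticketmonster-ha
  done
}
```

Run `start_cluster` on a Docker host, then open /mcm to watch the instances register.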

Ways to update the Ticket-Monster version on a running container

Note: This is shown here for learning purposes. This approach is not recommended because the changes are not persisted and the updated version will be lost if the container is restarted.

With the WildFly server you can deploy your application in multiple ways:

  • You can use the CLI
  • You can use the web console
  • You can use the deployment scanner

Remember to start the container exposing port 9990.


docker run -d --name server1 --net mynet -p 9990 rafabene/wildfly-ticketmonster

Note that we don't specify the host port; we let Docker assign one itself. This avoids port collisions when running more than one WildFly instance on the same Docker host.
You can query the Docker host port associated with the running WildFly container by executing:

docker port server1
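`docker port server1 9990/tcp` prints a mapping such as `0.0.0.0:32769` (the port number will differ on your host). If you need the bare port in a script, the output can be parsed like this (sketch with a hard-coded sample line standing in for the real command output):

```shell
# In a real script you would use:  line=$(docker port server1 9990/tcp)
line="0.0.0.0:32769"
host_port=${line##*:}   # strip everything up to the last colon
echo "$host_port"       # prints 32769
```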

You can check if the deployment worked by checking the container log:

docker logs -f server1

Using the CLI

If you have a local installation of WildFly, go to its bin/ folder and run jboss-cli.sh to connect to the running WildFly Docker container.
NOTE: The username and password credentials were set in the Docker image:

./jboss-cli.sh --controller=<DOCKER_HOST>:<HOST_PORT> -u=admin -p=docker#admin -c

You can also use the docker inspect command to get the docker host port for 9990:

./jboss-cli.sh --controller=localhost:`docker inspect --format='{{$map := index .NetworkSettings.Ports "9990/tcp"}}{{$result := index $map 0}}{{$result.HostPort}}' server1` -u=admin -p=docker#admin -c #For Linux containers
active=`docker-machine active`; ./jboss-cli.sh --controller=`docker-machine ip $active`:`docker inspect --format='{{$map := index .NetworkSettings.Ports "9990/tcp"}}{{$result := index $map 0}}{{$result.HostPort}}' server1` -u='admin' -p='docker#admin' -c #For docker-machine containers

Once you're connected through jboss-cli, run:

deploy <TICKET_MONSTER_PATH>/ticket-monster.war --force

Using the web console

NOTE: The username and password credentials were set in the Docker image:

  • Go to the container administration web console in a web browser.
  • Log in with the following credentials: username: admin / password: docker#admin .
  • Go to the "Deployments" tab.
  • Click on the "Replace" button.
  • On the "Step 1/2" screen, select the ticket-monster.war file on your computer and click "Next".
  • On the "Step 2/2" screen, click "Next" again.

At this moment the new Ticket Monster version should be deployed.

Using the deployment scanner

To modify the content inside a running WildFly container that already has applications deployed, you will need to mount a host directory as a volume inside the container.

In this example we will use the following host directory: ~/wildfly-deploy

First, we will need to start the container mapping the host directory ~/wildfly-deploy to /tmp/deploy inside the container:

docker run -d --name server1 --net mynet -v ~/wildfly-deploy:/tmp/deploy rafabene/wildfly-ticketmonster

Then, copy the ticket-monster.war to ~/wildfly-deploy:

cp ticket-monster.war ~/wildfly-deploy/

Finally, execute a mv command inside the running container to move /tmp/deploy/ticket-monster.war to /opt/jboss/wildfly/standalone/deployments/:

docker exec -it server1 /bin/bash -c 'mv /tmp/deploy/ticket-monster.war /opt/jboss/wildfly/standalone/deployments/'
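
The deployment scanner reports the result through marker files next to the archive: ticket-monster.war.deployed on success, ticket-monster.war.failed on failure. A small sketch that polls for either marker (the `wait_for_deploy` helper is ours; the directory is passed in so it can be pointed at the scanner directory or anywhere else):

```shell
#!/bin/sh
# Poll the scanner's marker files until the deployment succeeds or fails.
wait_for_deploy() {
  dir="$1"; war="$2"
  while true; do
    [ -f "$dir/$war.deployed" ] && { echo "deployed"; return 0; }
    [ -f "$dir/$war.failed" ]   && { echo "failed";   return 1; }
    sleep 1
  done
}
```

You could run it inside the container, e.g. via `docker exec server1`, against /opt/jboss/wildfly/standalone/deployments.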