  • install Ansible
  • install Python 3.5
  • mkvirtualenv --python=python3 infrastructure # optional: create a virtualenv
  • workon infrastructure # optional: when using a virtualenv
  • pip install -r requirements.txt
  • ansible-galaxy install -r requirements-ansible.yml

pahaz development mode

All project sub-repositories are located in the ./docks.local directory.

Development process:

  • inv init_all_repositories -- download all git repositories (proctoring-project and kurento-node)
  • docker-compose up -- bring up the whole infrastructure locally
  • docker-compose exec web python migrate -- migrate the DB (if required)
  • docker-compose exec web python -- initialize the DB (if required)
  • docker-compose exec web bash -- run bash

What's this?

This is a set of cluster infrastructure management tools.

We have many installations, and each installation has its own name.
All installations are located in the environments directory.

All cluster manipulations are available as inv subcommands
(see the invoke Python package for more details).

You can choose the current installation by setting the ENV environment variable.
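The ENV lookup can be sketched as follows (a minimal illustration; the `current_env` function name and the default value are assumptions, not part of the project):

```python
import os

DEFAULT_ENV = "test_examus_net"  # assumed default, for illustration only

def current_env():
    """Return the installation name selected via the ENV variable.

    Falls back to a default when ENV is unset, mirroring how the inv
    tasks pick a settings module from the ./environments directory.
    """
    return os.environ.get("ENV", DEFAULT_ENV)
```

With such a lookup, `ENV=tpahaz_examus_net inv info` would operate on the `environments/tpahaz_examus_net` settings.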

How to work with different environments easily?

  • ENV=test_examus_net inv rshell "id" -- run the remote shell command id on all test_examus_net cluster nodes
  • ENV=tpahaz_examus_net inv rshell "id" -- run the remote shell command id on all tpahaz_examus_net cluster nodes

Inv command examples

  • inv info -- show current cluster information
  • inv update_vms_state -- update current cluster state (create/destroy instances and update dns)
  • inv deploy -- release current cluster version (same as -f restart / see restart.yml)
  • inv deploy -f repair -- redeploy by repair.yml file
  • inv deploy -f redeploy -- redeploy all (deploy from scratch / from zero to one / full redeploy / see redeploy.yml)
  • inv deploy -f redeploy -u -- redeploy all + update_vms_state
  • inv rshell -n node7 "echo 1" -- run the remote shell command echo 1 on cluster node node7
  • inv rshell "docker pull examus/proctoring-project" -- pull the private image from Docker Hub
  • inv rshell -n rt1 "cd docks && ./ restart web" -- restart a service over ssh
  • inv rshell -n rt1 "cd docks && ./ exec -T web python migrate" -- migrate over ssh
  • inv rput ./xxx -- put the file to /root/xxx
  • inv rget ./xxx -- download the file from all nodes to ./_rget/{{ node_name }}/<path>
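Conceptually, a command like inv rshell just runs one shell command on every cluster node over SSH and collects the output. A minimal stand-in for that fan-out logic (the `run_on_all` name and the injectable `runner` are illustrative assumptions, not the project's real implementation):

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_on_all(nodes, command, runner=None):
    """Run `command` on every node and return a {node: output} map.

    `runner` builds the per-node argv; by default it wraps the command
    in `ssh root@<node> ...`, but it is injectable so the fan-out logic
    can be exercised locally without SSH access.
    """
    if runner is None:
        runner = lambda node, cmd: ["ssh", f"root@{node}", cmd]

    def run(node):
        out = subprocess.run(runner(node, command),
                             capture_output=True, text=True, check=True)
        return node, out.stdout.strip()

    # run the command on all nodes in parallel, like rshell does
    with ThreadPoolExecutor(max_workers=8) as pool:
        return dict(pool.map(run, nodes))
```

For example, `run_on_all(["node7", "rt1"], "id")` would return the `id` output per node.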

Extra inv command examples (DEPRECATED)

  • inv rdocker -n node7 "pull swarm" -- run docker command on remote server
  • inv rdocker -p 2375 -n node7 "info" -- run command on swarm master (node7)
  • inv rdockercompose -n node7 "build" -- run docker-compose on remote server
  • inv rdockercompose -n node7 "exec web python3 migrate"
  • inv rdockercompose -n node7 "exec web python3"

./environments/* file format

  • DROPLET_PROVIDER -- the droplet provider. We support Azure (azure), DigitalOcean (do) and static servers (static).


static

The simplest droplet provider, suitable for a set of statically created VMs.

You need to set the DROPLETS variable: an IP -> hostname map.

    '': 'tpahaz',
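A filled-in static environment file might look like this (the IPs below are placeholders from the documentation address range, not real cluster addresses):

```python
# environments/<name>.py -- static provider (illustrative values)
DROPLET_PROVIDER = 'static'

# IP -> hostname map for the statically created VMs
DROPLETS = {
    '192.0.2.10': 'tpahaz',
    '192.0.2.11': 'node7',
}
```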


do

Dynamic droplet provider (DigitalOcean).
Supports the power_up, power_down, _create_x2_nodes and _delete_x2_nodes features.

You need a base image and an access token.

DROPLET_DO_TOKEN = '2a3823c2846a05f2968944eaff04d1f87081359cf470763f793deb0a7c8c6add'  # noqa
DROPLET_DO_IMAGE = '23776649'  # base image ID for all nodes


azure

Dynamic droplet provider (Azure).
Supports the power_up, power_down, _create_x2_nodes and _delete_x2_nodes features.

NOTE: to work with Azure you need to install the az CLI and run az login before use.

You need an image and a security group.

DROPLET_AZURE_IMAGE = 'ubuntyu-test-image-20170524'
DROPLET_AZURE_SIZE = 'Standard_DS1_v2'

naming configuration (only for dynamic droplet providers)

You can set the name pattern for your droplets.
This feature makes it possible to have more than one environment in a single account/region/resource group.

NAME_SIGNER_SECRET = 'AWdjn2j21WEQjen21123w'
NAME_PREFIX = 'pahaz'

Use case: change the NAME_PREFIX and create new droplets with _create_x2_nodes.
As a result you get two distinct droplet sets.
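The exact name pattern is not documented here, but one plausible scheme consistent with a NAME_PREFIX plus a NAME_SIGNER_SECRET is an HMAC-tagged name (everything below -- the function, the tag length, the name layout -- is an assumption for illustration only):

```python
import hashlib
import hmac

NAME_SIGNER_SECRET = 'AWdjn2j21WEQjen21123w'
NAME_PREFIX = 'pahaz'

def droplet_name(index, prefix=NAME_PREFIX, secret=NAME_SIGNER_SECRET):
    """Build a droplet name like '<prefix>-<index>-<tag>'.

    The short HMAC tag lets the tooling recognise its own droplets in a
    shared account/region/resource group, so two environments that use
    different NAME_PREFIX values never collide.
    """
    base = f"{prefix}-{index}"
    tag = hmac.new(secret.encode(), base.encode(), hashlib.sha256).hexdigest()[:8]
    return f"{base}-{tag}"
```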

cluster configuration

  • CLUSTER -- droplet configuration (node groups and node variables)

    CLUSTER = {
        '<node1-name>': (['<node1-group1>', '<node1-group2>', ...], {
            '<node1-key1>': <node1-key1-value>,
            '<node1-key2>': <node1-key2-value>,
        }),
        '<node2-name>': (['<node2-group1>', '<node2-group2>', ...], {
            '<node2-key1>': <node2-key1-value>,
            '<node2-key2>': <node2-key2-value>,
        }),
    }

  • VARIABLES -- common variables for all nodes

    VARIABLES = {
        '<key1>': <key1-value>,
        '<key2>': <key2-value>,
    }

For dynamic droplet providers you can configure ADDITIONAL_DROPLET_NAME_PREFIX and ADDITIONAL_DROPLET_SETTINGS.

ADDITIONAL_DROPLET_SETTINGS = (['<additional-nodes-group1>', '<additional-nodes-group2>', ...], {
    '<additional-nodes-key1>': <additional-nodes-key1-value>,
    '<additional-nodes-key2>': <additional-nodes-key2-value>,
})

You can set variables per node in CLUSTER, or for all nodes at once in VARIABLES.
These variables are used by the Ansible roles.
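The precedence can be sketched as a simple dict merge, with per-node values from CLUSTER overriding the shared VARIABLES (the `node_variables` helper and the keys/values below are illustrative; the real lookup is done by the inv/Ansible glue):

```python
VARIABLES = {
    'consul_datacenter': 'dc1',    # illustrative common values
    'docker_log_level': 'info',
}

CLUSTER = {
    'node7': (['web', 'consul'], {'docker_log_level': 'debug'}),
    'kms1': (['kms'], {}),
}

def node_variables(name):
    """Merge the common VARIABLES with one node's CLUSTER variables."""
    groups, node_vars = CLUSTER[name]
    merged = dict(VARIABLES)
    merged.update(node_vars)   # per-node values win over common ones
    return merged
```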

You can check Ansible stuff here:

  • roles
  • *.yml
  • group_vars

In a few words, environments give you a way to set Ansible variables per installation.

Please don't use group_vars for per-installation variables; use VARIABLES instead.


We have two installations: az7_examus_net (Azure) and node7_examus_net (DO).
NOTE: to work with Azure, install the az CLI and run az login first.

We want to load-test 300 students from Azure against DO node7.

  1. Create testing droplets on Azure.
    ENV=az7_examus_net inv _create_x2_nodes -c 50

  2. Power UP DO droplets.
    ENV=node7_examus_net inv power_up

  3. Add extra KMS nodes.
    ENV=node7_examus_net inv _create_x2_nodes -c 10

  4. Repair the consul cluster for the new KMS nodes.
    ENV=node7_examus_net inv deploy -f repair

  5. Set up the script: change the ENV, ENV_GROUP and TARGET variables.

  6. Run the load-testing scripts forever as a daemon.

  7. Copy all node IPs from .inventory for az7_examus_net.
    Run /Applications/Google\ Chrome --new-window http://<IP1>:6080/?password=zxc123
    or open http://<IP1>:6080/?password=zxc123 for each IP.

  8. See the result

Sberbank testing mode

  • ENV=node7_examus_net inv power_up -- power up the cluster
  • ENV=node7_examus_net inv _create_x2_nodes -c 10 -- bring up 10 KMS nodes
  • ENV=az7_examus_net inv _create_x2_nodes -c 50 -- bring up 50 testing nodes
  • ENV=node7_examus_net inv deploy -f repair -- repair the consul cluster and the new nodes
  • ENV=node7_examus_net inv deploy -f datadog -- set up Datadog monitoring
  • ./ -- run the load test

Real examples

  • inv rshell 'docker rm $(docker ps -a -q)' -- delete all containers
  • inv rshell 'docker rmi $(docker images -q)' -- delete all images
  • inv rshell 'docker rmi $(docker images -q --filter "dangling=true")' -- delete dangling images

  • docker run -v consul:/consul/data -v /etc/consul/config.json:/consul/config.json:ro -v /etc/consul/ssl/:/consul/ssl/ -p 8300:8300 -p 8301:8301 -p 8400:8400 -p 8500:8500 -p 53:8600 -it consul /bin/sh

  • docker-compose --tlsverify -H tcp:// --tlscacert=/Users/pahaz/PycharmProjects/infrastructure/_deploy_files/certs/docker/ca.pem --tlscert=/Users/pahaz/PycharmProjects/infrastructure/_deploy_files/certs/docker/cert.pem --tlskey=/Users/pahaz/PycharmProjects/infrastructure/_deploy_files/certs/docker/key.pem exec web /bin/bash
  • inv rshell -n x 'docker run -it -v /dev/shm:/dev/shm examus/selenium --server --server --login --password rdfpbghjrnjhbyuRj45 --students_count 3'
  • sudo nmap -n -PN -sT -sS -sU -p- -vvv -PS80,22

known problems

  • certs are read-only for root!
  • swarm uses the consul token but works without the consul key prefix!
  • build and push
  • ansible docker multi-interface binding
  • consul ACL
  • docker --net=host
  • docker-compose v2 service DNS doesn't work
  • docker-compose pull problem!
  • ansible docker-py version check bug
  • docker and ufw
  • docker and iptables

How-to fix/do

we have a consul problem!

inv rshell "service docker stop"
inv rshell "rm -rf /etc/consul/_data/*"
inv rshell "service docker start"
inv rshell "/root/docks/ restart"

we have a consul cert problem!

inv init_consul_certs
inv rshell "docker rm -f consul-agent"
inv rshell "docker rm -f consul-registrator"
inv rshell "rm -rf /etc/consul/_data/*"
inv deploy -f repair

we have a docker problem!

inv rshell "service docker stop"
inv rshell "rm -rf /var/lib/docker"
inv rshell "service docker start"
inv deploy -f repair  # restores consul and re-downloads containers

we need to make the production hotfix now!

# create new image with fix!
# ssh to the nodes and run commands
cd /root/docks && ./ pull
# change image!
cd /root/docks && ./ up -d --no-deps --build --force-recreate SERVICE1 SERVICE2

we need to make the production hotfix as fast as possible! (we have only one container)

# !! ATTENTION: this works only if you have ONE web container !!
# ssh to the nodes and run commands
cd /root/docks && ./ exec web bash
$ make fixes
./ restart web
# you must create a new git branch named `<current-version>-fix<hotfix-number>`

we need to see logs on a production

# ssh to the nodes and run commands
cd /root/docks && ./ logs --tail=100 -f SERVICE1

restore db from dump

docker cp pgdump.db postgres:/tmp/pgdump.db
docker exec -it postgres su postgres -c "dropdb project"
docker exec -it postgres su postgres -c "createdb project"
docker exec -it postgres su postgres -c "psql -d project -f /tmp/pgdump.db"

create db dump

docker exec postgres gosu postgres pg_dump -f /tmp/pgdump.db project
docker cp postgres:/tmp/pgdump.db pgdump.db

connect to psql

root@ubuntu:~/docks# docker exec -it postgres su postgres -c "psql"
psql (9.5.3)
Type "help" for help.

postgres=# \l
                                 List of databases
   Name    |  Owner   | Encoding |  Collate   |   Ctype    |   Access privileges
 postgres  | postgres | UTF8     | en_US.utf8 | en_US.utf8 |
 project   | postgres | UTF8     | en_US.utf8 | en_US.utf8 |
 template0 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
           |          |          |            |            | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
           |          |          |            |            | postgres=CTc/postgres
(4 rows)

postgres=# \c project
You are now connected to database "project" as user "postgres".
project=# \dt
                          List of relations
 Schema |                  Name                   | Type  |  Owner
 public | alerts_alert                            | table | postgres
 public | attachments_attachment                  | table | postgres
 public | attachments_user_files                  | table | postgres
 public | auth_group                              | table | postgres
 public | auth_group_permissions                  | table | postgres
 public | auth_permission                         | table | postgres
 public | authtoken_token                         | table | postgres
 public | cal_entry                               | table | postgres
 public | cal_slot                                | table | postgres
 public | customsettings_setting                  | table | postgres
 public | django_admin_log                        | table | postgres
 public | django_comment_flags                    | table | postgres
 public | django_comments                         | table | postgres

need docker cleanup

# remove exited containers
docker ps --filter status=dead --filter status=exited -aq | xargs -r docker rm -v

# remove unused images
docker images --no-trunc | grep '<none>' | awk '{ print $3 }' | xargs -r docker rmi

# (WARNING: you must understand it) remove unused volumes
docker volume ls -qf dangling=true | xargs -r docker volume rm

or just:

docker container prune
docker image prune
docker network prune
docker volume prune

want to run a cron-like job at night

# add your task to `_deploy_files/`
# then run redeploy to deploy the updated `common` role