environments-tech

TL;DR

To run the required backend DBs, simply cd into the docker/ folder and run docker-compose up -d.
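That is:

cd docker
docker-compose up -d    # starts the backend services (MongoDB, RabbitMQ, Redis) in the background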

Tests

The tests will run against your current local setup of both API-Node and cp_lib, so make sure they are pointed at the desired codebase.

Attention: the cp_lib used by API-Node for the tests will be your local cp_lib, not the one in API-Node/node_modules.

cd ./docker

# build the Docker test image
./build.sh && ./build_test.sh

# update the API_PATH and CP_LIB_PATH variables based on your local configuration
vim ./run_test.sh

# start the tests on your local API-Node version
./run_test.sh
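For reference, the two variables you edit in run_test.sh would end up looking something like this (the paths below are hypothetical; point them at your own checkouts):

# hypothetical local paths; adjust to where API-Node and cp_lib live on your machine
API_PATH="$HOME/projects/API-Node"
CP_LIB_PATH="$HOME/projects/cp_lib"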

Get started

This tutorial will help you set up your working environments; they contain everything you need to run the following projects on your machine, both locally and in preproduction or production scenarios:

  • the API (API-Node),
  • the old website/backoffice (website-backoffice),
  • the new backoffice (Backoffice).

The projects need to connect to a few external dependencies: MongoDB, RabbitMQ, and Redis. The main goal of this tutorial is actually those dependencies, more than the projects themselves.
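As a rough sketch, "connecting" to these dependencies boils down to handing each project a few connection URLs on the services' default ports. The variable names below are illustrative, not necessarily the ones the projects actually read; the environment scripts described later do the real wiring:

# illustrative variable names, using the services' default ports
export MONGO_URL="mongodb://localhost:27017/api"
export RABBITMQ_URL="amqp://guest:guest@localhost:5672"
export REDIS_URL="redis://localhost:6379"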

The local environment

In order to run the projects on your machine only (running both the servers and the other dependencies), you need to fire up an instance of each of the aforementioned services. We use a Docker image that bundles all of them.

Get Docker on Ubuntu:

Read it here.

  • Verify that you have curl installed.

    which curl
    

    If curl isn't installed, install it after updating your package manager:

    sudo apt-get update
    sudo apt-get install curl
    
  • Get the latest Docker package.

    curl -sSL https://get.docker.com/ | sh
    

    The script prompts you for your sudo password, then downloads and installs Docker and its dependencies.

    Once it finishes, restart your machine.
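As a quick sanity check after the reboot (not part of the official steps, just the standard Docker smoke test):

sudo docker run hello-world    # prints a welcome message if the daemon works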

Get Docker on Mac OS X:

Read it here.

On Mac OS X, Docker must run within a Linux virtual machine (through VirtualBox); if you want more background on that, the link above explains it.

Port forwarding to your localhost should be done by running:

VBoxManage controlvm "machine-name" natpf1 "MongoDB,tcp,,27017,,27017"
VBoxManage controlvm "machine-name" natpf1 "RabbitMQ,tcp,,5672,,5672"
VBoxManage controlvm "machine-name" natpf1 "Redis,tcp,,6379,,6379"
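To check that the forwarding took effect, you can probe each port from the host; this assumes nc (netcat) is available, but any port checker will do:

nc -z localhost 27017 && echo "MongoDB reachable"
nc -z localhost 5672 && echo "RabbitMQ reachable"
nc -z localhost 6379 && echo "Redis reachable"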

Build and run the Docker image

Note: if the internet connection sucks, you'd probably be better off loading the image from a dump rather than building it from the Dockerfile.

The Dockerfile describes which packages must be installed and run; go to the docker/ folder in environments-tech, and run ./build.sh.

Note: the build process seems to hang on "Setting up ca-certificates-java" on docker-machine's default machine. To get around this issue, create another machine with --engine-storage-driver set to overlay:

docker-machine create -d virtualbox --engine-storage-driver overlay machine-name
docker-machine start machine-name
eval $(docker-machine env machine-name)
./build.sh

Once the image is built, you can run it by firing up ./run.sh. Congratulations: you now have MongoDB, RabbitMQ, and Redis running on your system. From then on, the steps are very similar whichever environment you target.
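A quick way to confirm everything is up:

docker ps    # the container started by run.sh should be listed with an "Up" status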

Running the projects

Whether you are starting them locally or against preproduction, it's a piece of cake.

Provided you've got the following directory structure:

.
├── API-Node
│   ├── server.js
│   └── ...
├── Backoffice
│   ├── Gruntfile.js
│   └── ...
├── environments-tech
│   ├── README.md
│   ├── docker
│   ├── preproduction-api.sh
│   ├── preproduction-backoffice.sh
│   ├── preproduction-idp.sh
│   └── preproduction-website-backoffice.sh
└── website-backoffice
    ├── server.js
    └── ...

Fully local

You must first start the Docker services using the ./run.sh command introduced above. Then:

  • Start API-Node:

    cd API-Node/
    node server.js
    
  • Start website-backoffice:

    cd website-backoffice/
    node server.js
    
  • Start Identity Provider:

    cd identity-provider/
    npm run start    # or: nodemon --harmony server/index.js
    
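If you'd rather bring up the whole local stack in one shot, something like this works; it's a convenience sketch assuming the directory layout above, not a script shipped with environments-tech:

# convenience sketch, not part of environments-tech;
# assumes the Docker services are already running (./run.sh)
(cd API-Node && node server.js) &
(cd website-backoffice && node server.js) &
(cd identity-provider && npm run start) &
wait    # keep the shell attached to the three servers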

In preproduction mode

When doing this, you plug your local Node servers into the preprod MongoDB, Redis, and RabbitMQ.
You should consider dumping the preprod database onto your computer so you can use preprod data
without messing with it (see next section).

  • Start API-Node:

    source ./environments-tech/preproduction-api.sh
    cd API-Node/
    node server.js
    
  • Start website-backoffice:

    source ./environments-tech/preproduction-website-backoffice.sh
    cd website-backoffice/
    node server.js
    
  • Start Backoffice:

    source ./environments-tech/preproduction-backoffice.sh
    cd Backoffice/
    grunt serve
    
  • Start Identity Provider:

    source ./environments-tech/preproduction-idp.sh
    cd identity-provider/
    npm run start    # or: nodemon --harmony server/index.js
    
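For the curious, these preproduction-*.sh files presumably just export the relevant connection settings; here is a sketch of what preproduction-api.sh might contain. All hostnames, credentials, and variable names below are placeholders, not the real preprod values:

# placeholder values; the real script carries the actual preprod endpoints
export MONGO_URL="mongodb://preprod-mongo.example.com:27017/api"
export RABBITMQ_URL="amqp://user:password@preprod-rabbitmq.example.com:5672"
export REDIS_URL="redis://preprod-redis.example.com:6379"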

In preproduction dump mode

You will extract a Mongo database from preprod and use it locally.
Go to Compose and download a backup.

Extract this backup into ~/data/dump-preprod/. Then run the dump-preprod Docker script: ./run-dumpprepprod.sh.
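Put together, the dump workflow might look like this (the archive name is a placeholder for whatever Compose gives you, and the script is assumed to live in the docker/ folder next to run.sh):

mkdir -p ~/data/dump-preprod
tar -xzf ~/Downloads/preprod-backup.tar.gz -C ~/data/dump-preprod/    # archive name is a placeholder
cd environments-tech/docker
./run-dumpprepprod.sh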

  • Start API-Node:

    cd API-Node/
    source ./environments/dump-preprod.sh
    node server.js
    
  • Start website-backoffice:

    cd website-backoffice/
    source ./environments/dump-preprod.sh
    node server.js
    
  • Start Backoffice:

    cd Backoffice/
    source ./environments/dump-preprod.sh
    grunt serve
    
  • Start Identity Provider:

    cd identity-provider/
    source ./environments/dump-preprod.sh
    npm run start    # or: nodemon --harmony server/index.js
    
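Once the dump container is running, a quick check against the local MongoDB confirms the data made it in (the database name below is a guess; use whatever the dump actually contains):

# list the collections of the restored database (database name is a guess)
mongo localhost:27017/api --eval 'db.getCollectionNames()'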
:rocket: Fasten your seat belts. This is going to be a bumpy ride!