Ruby 2.3 platform for building and running applications
This repository contains the source for building various versions of
the Ruby application as a reproducible Docker image using source-to-image (S2I).
Users can choose between RHEL- and CentOS-based builder images.
The resulting image can be run using Docker.
For the RHEL-based image:
$ s2i build https://github.com/sclorg/s2i-ruby-container.git --context-dir=2.3/test/puma-test-app/ rhscl/ruby-23-rhel7 ruby-sample-app
$ docker run -p 8080:8080 ruby-sample-app
For the CentOS-based image:
$ s2i build https://github.com/sclorg/s2i-ruby-container.git --context-dir=2.3/test/puma-test-app/ centos/ruby-23-centos7 ruby-sample-app
$ docker run -p 8080:8080 ruby-sample-app
Accessing the application:
$ curl 127.0.0.1:8080
CentOS-based Dockerfile.
RHEL-based Dockerfile. To perform build or test actions on this
Dockerfile, you need to run them on a properly subscribed RHEL machine.
This folder contains scripts that are run by S2I:
Used to install the sources into the location where the application
will be run, and to prepare the application for deployment (e.g. installing
dependencies with Bundler).
This script is responsible for running the application using the
application web server.
This script prints the usage of this image.
This folder contains a file with commonly used modules.
This folder contains an S2I test framework with a simple Rack server.
To set these environment variables, you can place them as key-value pairs into a
file inside your source code repository.
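For example, such a file is a plain list of KEY=value pairs, one per line (the values below are illustrative only, using variable names described later in this document):

```
RACK_ENV=production
PUMA_MAX_THREADS=16
```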
This variable specifies the environment in which the Ruby application will be deployed (unless overridden).
Each level has different behavior in terms of logging verbosity, error pages, Ruby gem installation, etc.
Note: Application assets will be compiled only if RACK_ENV is set to production.
When set to true, this variable indicates that the asset compilation process will be skipped. Since compilation only takes place
when the application is run in the production environment, it should only be used when assets are already compiled.
These variables indicate the minimum and maximum threads that will be available in Puma's thread pool.
This variable indicates the number of worker processes to be launched. See Puma's documentation on clustered mode.
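As a sketch of how these variables typically reach Puma, a config/puma.rb along these lines would wire them into the thread pool and clustered mode (the PUMA_MIN_THREADS and PUMA_WORKERS names are assumptions mirroring the PUMA_MAX_THREADS variable used later in this document; this is not the image's actual configuration):

```ruby
# config/puma.rb -- hypothetical sketch, not the builder image's own config.
# Thread pool bounds per worker (PUMA_MIN_THREADS is an assumed name):
threads Integer(ENV.fetch("PUMA_MIN_THREADS", 0)),
        Integer(ENV.fetch("PUMA_MAX_THREADS", 16))

# Number of worker processes; a value > 0 enables Puma's clustered mode
# (PUMA_WORKERS is an assumed name):
workers Integer(ENV.fetch("PUMA_WORKERS", 0))
```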
Set this variable to use a custom RubyGems mirror URL to download required gem packages during the build process.
To dynamically pick up changes made in your application source code, take the following steps:
For Ruby on Rails applications
Run the built Rails image with the RAILS_ENV=development environment variable passed to the docker run command:
$ docker run -e RAILS_ENV=development -p 8080:8080 rails-app
For other types of Ruby applications (Sinatra, Padrino, etc.)
Your application needs to be built with one of the gems that reload the server whenever the source code changes inside the running container. Those gems are:
Please note that in order to run your application in development mode, you need to modify the S2I run script so that the web server is launched by the chosen gem, which watches for changes in the source code.
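For illustration, adding one such reloader to the Gemfile might look like this (rerun is used as an assumed example; substitute whichever gem your application uses):

```ruby
# Gemfile -- hypothetical excerpt; "rerun" stands in for any reloading gem.
group :development do
  gem "rerun"  # restarts the server when files in the app directory change
end
```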
$ docker run -e RACK_ENV=development -p 8080:8080 sinatra-app
To change your source code in the running container, use Docker's exec command:
$ docker exec -it <CONTAINER_ID> /bin/bash
After you docker exec into the running container, your current directory is set to
/opt/app-root/src, where the source code is located.
You can tune the number of threads per worker using the
PUMA_MAX_THREADS environment variable.
Additionally, the number of worker processes is determined by the number of CPU
cores that the container has available, as recommended by
Puma's documentation. This is determined using
the cgroup cpusets
subsystem. You can specify the cores that the container is allowed to use by passing
the --cpuset-cpus parameter to the docker run command:
$ docker run -e PUMA_MAX_THREADS=32 --cpuset-cpus='0-2,3,5' -p 8080:8080 sinatra-app
The number of workers is also limited by the memory limit that is enforced using
cgroups. The builder image assumes that you will need 50 MiB as a base and
another 15 MiB for every worker process plus 128 KiB for each thread. Note that
each worker has its own threads, so the total memory required for the whole
container is computed using the following formula:
50 + 15 * WORKERS + 0.125 * WORKERS * PUMA_MAX_THREADS
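As a worked example of this formula (a sketch using the constants stated above; all values in MiB):

```ruby
# Estimated container memory from the formula above:
#   50 MiB base + 15 MiB per worker + 0.125 MiB (128 KiB) per worker thread
def required_memory_mib(workers, max_threads)
  50 + 15 * workers + 0.125 * workers * max_threads
end

# 2 workers, each with up to 32 threads:
puts required_memory_mib(2, 32)  # => 88.0
```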
You can specify a memory limit using the --memory parameter:
$ docker run -e PUMA_MAX_THREADS=32 --memory=300m -p 8080:8080 sinatra-app
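Conversely, solving the formula for the worker count shows how such a memory limit bounds the number of workers (a sketch consistent with the formula above, not code taken from the image):

```ruby
# Largest WORKERS satisfying:
#   limit >= 50 + WORKERS * (15 + 0.125 * PUMA_MAX_THREADS)
def max_workers(limit_mib, max_threads)
  ((limit_mib - 50) / (15 + 0.125 * max_threads)).floor
end

# With --memory=300m and PUMA_MAX_THREADS=32:
puts max_workers(300, 32)  # => 13
```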
If memory is more limiting than the number of available cores, the number of
workers is scaled down to fit the above formula. The number of
workers can also be set explicitly by setting