Tarantool Docker repository

What is Tarantool

Tarantool is a Lua application server integrated with a database
management system. It has a "fiber" model which means that many
Tarantool applications can run simultaneously on a single thread,
while the Tarantool server itself can run multiple threads for
input-output and background maintenance. It incorporates the LuaJIT --
"Just In Time" -- Lua compiler, Lua libraries for most common
applications, and the Tarantool Database Server which is an
established NoSQL DBMS. Thus Tarantool serves all the purposes that
have made node.js and Twisted popular, plus it supports data
persistence.

The database API allows for permanently storing Lua objects, managing
object collections, creating or dropping secondary keys, making
changes atomically, configuring and monitoring replication, performing
controlled fail-over, and executing Lua code triggered by database
events. Remote database instances are accessible transparently via a
remote-procedure-invocation API.
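As a quick sketch of that API, you can evaluate a short Lua snippet in a one-off container (the space name and tuple layout below are illustrative only, not part of the image):

```shell
# Evaluate a short Lua snippet in a throwaway container (hypothetical
# example; the space name and tuple layout are illustrative)
docker run --rm tarantool/tarantool:1.7 tarantool -e '
box.cfg{}                                      -- start the database engine
local s = box.schema.space.create("demo")      -- an object collection
s:create_index("primary", {parts = {1, "unsigned"}})
s:insert{1, "hello"}                           -- change is logged durably
print(s:get(1)[2])
os.exit(0)
'
```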

For more information, visit the official Tarantool website.

If you just want to quickly try out Tarantool, run this command:

$ docker run --rm -t -i tarantool/tarantool:1.7

This will create a one-off Tarantool instance and open an interactive
console. From there you can either type tutorial() or follow the
official documentation.

About this image

This image is a bundle containing Tarantool itself and a set of Lua
modules and utilities often used in production. It is designed to be a
building block for modern services, and as such it makes a few design
choices that set it apart from a conventional systemd-controlled
Tarantool installation.

First, if you pin a specific version of this image, you can rely on
not receiving updates with incompatible modules: we only make major
module updates when the image version changes.

Second, the entry-point script provided by this image uses environment
variables to configure "external" aspects of the instance, such as
replication sources, memory limits, and so on. If specified, they
override the settings provided in your code, so you can use
docker-compose or other orchestration and deployment tools to set
those options.

There are a few convenience tools that make use of the fact that there
is only one Tarantool instance running in the container.

What's on board

  • avro: Apache Avro schema support for your data
  • expirationd: Automatically delete tuples based on expiration time
  • queue: Priority queues with TTL and confirmations
  • connpool: Keep a pool of connections to other Tarantool instances
  • shard: Automatically distribute data across multiple instances
  • http: Embedded HTTP server with Flask-style routing support
  • curl: HTTP client based on libcurl
  • pg: Query PostgreSQL right from Tarantool
  • mysql: Query MySQL right from Tarantool
  • memcached: Access Tarantool as if it were a Memcached instance
  • prometheus: Instrument code and export metrics to Prometheus monitoring
  • mqtt: Client for MQTT message brokers
  • gis: Store and query geospatial data
  • gperftools: Collect CPU profiles to find bottlenecks in your code

If the module you need is not listed here, there is a good chance we may add it. Open an issue on our GitHub.

Data directories

  • /var/lib/tarantool is a volume containing operational data
    (snapshots, xlogs and vinyl runs)

  • /opt/tarantool is the place where users should put their Lua
    application code
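For example, to keep snapshots and xlogs on the host across container restarts, you can mount a host directory over the data volume (the host path below is an arbitrary choice):

```shell
# Mount a host directory as the data volume so snapshots and xlogs
# survive container recreation (the host path is illustrative)
mkdir -p /srv/tarantool-data
docker run --name mytarantool -d \
  -v /srv/tarantool-data:/var/lib/tarantool \
  tarantool/tarantool:1.7
```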

Convenience utilities

  • console: execute it without arguments to open an administrative
    console to the running Tarantool instance

  • tarantool_is_up: returns 0 if Tarantool has been initialized and
    is operating normally

  • tarantool_set_config.lua: allows you to dynamically change certain
    settings without the need to recreate containers.
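The tarantool_is_up helper lends itself to Docker health checks; a sketch (the check interval below is an arbitrary choice):

```shell
# Use tarantool_is_up as a container health check
# (the 5-second interval is an arbitrary choice)
docker run --name mytarantool -d \
  --health-cmd="tarantool_is_up" \
  --health-interval=5s \
  tarantool/tarantool:1.7

# Or poll it manually from the host:
docker exec mytarantool tarantool_is_up && echo "instance is up"
```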

How to use this image

Start a Tarantool instance

$ docker run --name mytarantool -p3301:3301 -d tarantool/tarantool:1.7

This will start an instance of Tarantool 1.7 and expose it on
port 3301. Note that by default there is no password protection, so
don't expose this instance to the outside world.

In this case, since no Lua code is provided, the entry-point script
initializes the database using a sane set of defaults. Some of them
can be tuned with environment variables (see below).

Start a secure Tarantool instance

$ docker run --name mytarantool -p3301:3301 -e TARANTOOL_USER_NAME=myusername -e TARANTOOL_USER_PASSWORD=mysecretpassword -d tarantool/tarantool:1.7

This starts an instance of Tarantool 1.7, disables the guest login and
creates a user named myusername with admin privileges and the password
mysecretpassword.

As with the previous example, the database is initialized automatically.

Connect to a running Tarantool instance

$ docker exec -t -i mytarantool console

This will open an interactive admin console on the running instance
named mytarantool. You may safely detach from it at any time; the
server will continue running.

This console doesn't require authentication, because it uses a local
Unix socket inside the container to connect to Tarantool. It does,
however, require direct access to the container.

If you need a remote console via TCP/IP, use the tarantoolctl utility
as explained in the official Tarantool documentation.
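For instance, reusing the credentials from the "secure instance" example above, a remote console could be opened from any machine with tarantoolctl installed (the host address below is a placeholder):

```shell
# Connect to a remote Tarantool over TCP/IP; the host address is a
# placeholder, and the credentials come from the secure-instance example
tarantoolctl connect myusername:mysecretpassword@tarantool.example.com:3301
```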

Start a master-master replica set

You may start a replica set with docker alone, but it's more
convenient to use docker-compose.
Here's a simplified docker-compose.yml for starting a master-master
replica set:

version: '2'

services:
  tarantool1:
    image: tarantool/tarantool:1.7
    environment:
      TARANTOOL_REPLICATION_SOURCE: "tarantool1,tarantool2"
    networks:
      - mynet
    ports:
      - "3301:3301"

  tarantool2:
    image: tarantool/tarantool:1.7
    environment:
      TARANTOOL_REPLICATION_SOURCE: "tarantool1,tarantool2"
    networks:
      - mynet
    ports:
      - "3302:3301"

networks:
  mynet:
    driver: bridge

Start it like this:

$ docker-compose up

Adding application code with a volume mount

The simplest way to provide application code is to mount your code
directory to /opt/tarantool:

$ docker run --name mytarantool -p3301:3301 -d -v /path/to/my/app:/opt/tarantool tarantool/tarantool:1.7 tarantool /opt/tarantool/app.lua

Here /path/to/my/app is a host directory containing Lua code. Note
that for your code to actually run, you must execute the main script
explicitly; hence the trailing tarantool /opt/tarantool/app.lua,
assuming that your app's entry point is called app.lua.
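A minimal app.lua might look like the sketch below (the space name and host path are made up, and it assumes the image's entry-point script has already called box.cfg before loading the script):

```shell
# Create a tiny illustrative app and run it (names are made up; assumes
# the image entry point calls box.cfg before loading your script)
mkdir -p /path/to/my/app
cat > /path/to/my/app/app.lua <<'EOF'
-- Runs once per data directory: create a space with a primary index
box.once("bootstrap", function()
    local s = box.schema.space.create("kv")
    s:create_index("primary", {parts = {1, "string"}})
end)
EOF
docker run --name mytarantool -p3301:3301 -d \
  -v /path/to/my/app:/opt/tarantool \
  tarantool/tarantool:1.7 tarantool /opt/tarantool/app.lua
```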

Adding application code using container inheritance

If you want to pack and distribute an image with your code, you may
create your own Dockerfile as follows:

FROM tarantool/tarantool:1.7
COPY app.lua /opt/tarantool
CMD ["tarantool", "/opt/tarantool/app.lua"]

Please pay attention to the format of CMD: unless it is specified in
square brackets (exec form), the "wrapper" entry point that our Docker
image provides will not be called, which makes it impossible to
configure your instance using environment variables.
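Building and running the derived image then looks like this (the image tag "myapp" is an arbitrary choice):

```shell
# Build the derived image from the Dockerfile above and run it
# (the image tag "myapp" is an arbitrary choice)
docker build -t myapp .
docker run --name myapp-instance -p3301:3301 -d myapp
```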

Environment Variables

When you run this image, you can adjust some of Tarantool settings.
Most of them either control memory/disk limits or specify external
connectivity parameters.

If you need to fine-tune specific settings not described here, you can
always inherit this container and call box.cfg{} yourself. See the
official documentation on box.cfg for details.

TARANTOOL_USER_NAME

Setting this variable allows you to choose the name of the user that
is used for remote connections. By default, it is 'guest'. Please note
that since the guest user in Tarantool can't have a password, it is
highly recommended that you change it.


TARANTOOL_USER_PASSWORD

For security reasons, it is recommended that you never leave this
variable unset. It sets the user's password for Tarantool. In the
example above, it is set to "mysecretpassword".


TARANTOOL_PORT

Optional. Setting this variable tells Tarantool to listen for incoming
connections on the specified port. Default is 3301.


TARANTOOL_REPLICATION_SOURCE

Optional. A comma-separated list of URIs to treat as replication
sources. On startup, Tarantool will attempt to connect to those
instances, fetch the data snapshot and start replicating transaction
logs; in other words, it will become a replica. For a multi-master
configuration, the other participating Tarantool instances should be
started with the same TARANTOOL_REPLICATION_SOURCE. (NB: applicable
only to 1.7.)




TARANTOOL_SLAB_ALLOC_ARENA

Optional. Specifies how much memory Tarantool allocates to actually
store tuples, in gigabytes. When the limit is reached, INSERT and
UPDATE requests start failing. Default is 1.0.


TARANTOOL_SLAB_ALLOC_FACTOR

Optional. Used as the multiplier for computing the sizes of memory
chunks that tuples are stored in. A lower value may result in less
wasted memory, depending on the total amount of memory available and
the distribution of item sizes. Default is 1.1.


TARANTOOL_SLAB_ALLOC_MAXIMAL

Optional. Size of the largest allocation unit, in bytes. It can be
increased if you need to store large tuples. Default is 1048576.


TARANTOOL_SLAB_ALLOC_MINIMAL

Optional. Size of the smallest allocation unit, in bytes. It can be
decreased if most of the tuples are very small. Default is 16.


TARANTOOL_SNAPSHOT_PERIOD

Optional. Specifies how often snapshots are taken, in seconds.
Default is 3600 (once an hour).
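Putting several of these together (TARANTOOL_USER_NAME and TARANTOOL_USER_PASSWORD appear in the examples above; the other variable names are assumed from this image's entry-point conventions, so double-check them against your image version):

```shell
# Start an instance with tuned auth, memory and snapshot settings
# (TARANTOOL_SLAB_ALLOC_ARENA / TARANTOOL_SNAPSHOT_PERIOD names assume
#  this image's entry-point conventions; verify for your version)
docker run --name mytarantool -p3301:3301 -d \
  -e TARANTOOL_USER_NAME=myusername \
  -e TARANTOOL_USER_PASSWORD=mysecretpassword \
  -e TARANTOOL_SLAB_ALLOC_ARENA=2 \
  -e TARANTOOL_SNAPSHOT_PERIOD=600 \
  tarantool/tarantool:1.7
```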

Reporting problems and getting help

You can report problems and request
features on our GitHub.

Alternatively you may get help on our Telegram channel.


How to contribute

Open a pull request to the master branch. A maintainer is responsible
for updating all relevant branches when merging the PR.

How to check

Say we have updated 1.x/Dockerfile and want to check it:

$ docker build 1.x/ -t t1.x
$ docker run -it t1.x
...perform a test...

Build pipelines

Fixed versions:

Branch   Dockerfile        Docker tag
1.7.3    1.7/Dockerfile    1.7.3
1.7.4    1.7/Dockerfile    1.7.4
1.7.5    1.7/Dockerfile    1.7.5
1.7.6    1.7/Dockerfile    1.7.6
1.8.1    1.8/Dockerfile    1.8.1
1.9.1    1.x/Dockerfile    1.9.1
1.9.2    1.x/Dockerfile    1.9.2
1.10.0   1.x/Dockerfile    1.10.0
1.10.2   1.x/Dockerfile    1.10.2

Rolling versions:

Branch   Dockerfile        Docker tag
master   1.5/Dockerfile    1.5
master   1.6/Dockerfile    1.6
master   1.7/Dockerfile    1.7
master   1.x/Dockerfile    1
master   1.x/Dockerfile    latest
master   2.x/Dockerfile    2

Special builds:

Branch   Dockerfile                Docker tag
master   1.x-centos7/Dockerfile    1.x-centos7

How to push changes (for maintainers)

When a change applies to a specific Tarantool version or a range of
versions, update all relevant fixed and rolling versions in all
relevant branches, according to the pipelines listed above.

When a change applies to the environment in general, all versions need
to be updated in all relevant branches.

To add a new release (say, x.y.z): create or update the rolling
versions x and x.y in master, create the fixed version x.y.z on the
corresponding branch, and add the corresponding build pipeline on
Docker Hub.

A maintainer is responsible for checking the updated images.
