Short Description
This is a logstash v1.4.2 image that can be run using the embedded elasticsearch or with an elasticsearch node running in a separate container. It is similar to pblittle/docker-logstash, but override files can also be read from a volume.
Full Description

This is a logstash (1.4.2) image that is configurable to run using either the embedded elasticsearch or an elasticsearch node running in a separate container. It is very similar to (and based on the Dockerfile of) pblittle/docker-logstash - but with some differences, of course.

Override files can be retrieved from a Docker volume instead of only from an HTTP web server. The image uses curl instead of wget to retrieve an override logstash.conf file and the SSL cert and key files, because curl supports URLs of this form (wget does not):

file:///path/to/override-file
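To illustrate (the paths here are only stand-ins for a real mounted volume), curl reads a file:// URL exactly as it would an http:// one:

```shell
# Stage a sample override file on a local path (stand-in for a mounted volume)
mkdir -p /tmp/shared-data
echo 'input { syslog { port => 514 } }' > /tmp/shared-data/logstash.conf

# curl fetches it via a file:// URL; wget would reject this scheme
curl -s -o /tmp/fetched.conf file:///tmp/shared-data/logstash.conf
cat /tmp/fetched.conf
```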
So now we can make use of a Docker volume to more conveniently place override files. Here is a typical way to run this LogStash container:

sudo docker run -d \
        --volumes-from data-vol-creator \
        -v /data/elasticsearch \
        -e LOGSTASH_CONFIG_URL=file:///shared-data/logstash.conf \
        -e LF_SSL_CERT_URL=file:///shared-data/logstash-forwarder.crt \
        -e LF_SSL_CERT_KEY_URL=file:///shared-data/logstash-forwarder.key \
        -p 514:514 \
        -p 5043:5043 \
        -p 9200:9200 \
        -p 9292:9292 \
        --name logstash \
        <image>  # substitute this repository's image name
The /shared-data volume is established by the data-vol-creator container. It is also mounted into the file system of the host Docker daemon so that files can easily be modified and placed there via the docker@boot2docker login. On Windows or Mac OS X, where Boot2Docker is used, the svendowideit/samba container can mount the volume for docker@boot2docker access. Here are the steps to establish the shared data volume:

# Run image to create shared volume:
sudo docker run --name data-vol-creator -d -v /shared-data ubuntu /bin/bash

# Run samba image to publish the volume ('data-vol-creator' is the name of the volume-creator container):
sudo docker run --rm -v $(which docker):/docker -v /var/run/docker.sock:/docker.sock svendowideit/samba data-vol-creator

# Set the IP address of the running samba server container in a shell environment variable
SAMBA_SRV_IP=$(sudo docker inspect -f '{{.NetworkSettings.IPAddress}}' samba-server)

# Boot2Docker - mount shared volume in TinyCore64 host (need to determine the IP to use for the samba server):
sudo mkdir /mnt/shared-data &>/dev/null
sudo mount -t cifs //$SAMBA_SRV_IP/shared-data /mnt/shared-data -o username=guest

# Now run an image to access the shared volume ('data-vol-creator' is the name of the volume-creator container):
sudo docker run --name data-vol-sharer --rm -i -t --volumes-from data-vol-creator ubuntu /bin/bash

NOTE: the IP address referenced here will need to be the IP address of the samba server container.
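Once the share is mounted, the override files just need to be copied onto it. A sketch (using /tmp stand-ins both for the real conf/cert/key files and for the /mnt/shared-data mount point):

```shell
# Stand-in for the CIFS mount point (/mnt/shared-data on Boot2Docker)
share=/tmp/mnt-shared-data
mkdir -p "$share"

# Stand-ins for the real override files
cd /tmp
touch logstash.conf logstash-forwarder.crt logstash-forwarder.key

# Place the files where the container's file:// URLs expect to find them
cp logstash.conf logstash-forwarder.crt logstash-forwarder.key "$share"/
ls "$share"
```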

The exposed 5043 port is what the logstash-forwarder on the client side should connect to. It is an SSL connection and requires that both sides be configured with the SSL cert and key files.
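The image itself does not generate these files. One way to produce a self-signed pair is with openssl - the CN and output paths below are only examples; the CN should match the name the forwarder uses to reach the server:

```shell
# Generate a self-signed cert and key, valid for one year
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
    -subj '/CN=logstash.example.com' \
    -keyout /tmp/logstash-forwarder.key \
    -out /tmp/logstash-forwarder.crt

# Quick sanity check on the generated certificate
openssl x509 -in /tmp/logstash-forwarder.crt -noout -subject
```

The same .crt and .key can then be placed on the shared volume and referenced by the LF_SSL_CERT_URL and LF_SSL_CERT_KEY_URL environment variables.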

The Kibana web dashboard console can connect to exposed port 9292.

Port 9200 is the Elasticsearch HTTP port, and 514 is syslog.

The Dockerfile specifies these two directory paths as volumes:

VOLUME ["/shared-data", "/data/elasticsearch"]

By default, when running Elasticsearch in LogStash embedded mode, Elasticsearch will store data at:

/data/elasticsearch
This directory path was designated a volume so that it will not be included in image commits.

It is possible to run Elasticsearch as a separate process or container. Refer to the pblittle/docker-logstash container for information on doing that - it is still supported in this derivative container.

Notes: This container image is based on ubuntu:14.04, with updates applied and curl, OpenJDK 7 (headless), and LogStash 1.4.2 installed.
