Short Description
Collect, search and visualise log data with Elasticsearch, Logstash, and Kibana.
Full Description

Elasticsearch, Logstash, Kibana (ELK) Docker image

This Docker image provides a convenient centralised log server and log management web interface by packaging Elasticsearch, Logstash, and Kibana, collectively known as ELK.
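
For instance, a minimal way to start the container is to publish the stack's three ports — 5601 for the Kibana web interface, 9200 for Elasticsearch's JSON interface, and 5044 for Logstash's Beats input (the container name is illustrative):

# Run the ELK stack, exposing Kibana, Elasticsearch, and the Beats input:
docker run -p 5601:5601 -p 9200:9200 -p 5044:5044 -it --name elk sebp/elk:latest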

Documentation

See the ELK Docker image documentation web page for complete instructions on how to use this image.

Docker Hub

This image is hosted on Docker Hub at https://hub.docker.com/r/sebp/elk/.

The following tags are available:

  • latest, 563: ELK 5.6.3.

  • 562: ELK 5.6.2.

  • 561: ELK 5.6.1.

  • 560: ELK 5.6.0.

  • 553: ELK 5.5.3.

  • 552: ELK 5.5.2.

  • 551: ELK 5.5.1.

  • 550: ELK 5.5.0.

  • 543: ELK 5.4.3.

  • 542: ELK 5.4.2.

  • 541: ELK 5.4.1.

  • 540: ELK 5.4.0.

  • 532: ELK 5.3.2.

  • 531: ELK 5.3.1.

  • 530: ELK 5.3.0.

  • 522: ELK 5.2.2.

  • 521: ELK 5.2.1.

  • 520: ELK 5.2.0.

  • 512: ELK 5.1.2.

  • 511: ELK 5.1.1.

  • 502: ELK 5.0.2.

  • es501_l501_k501: ELK 5.0.1.

  • es500_l500_k500: ELK 5.0.0.

  • es241_l240_k461: Elasticsearch 2.4.1, Logstash 2.4.0, and Kibana 4.6.1.

  • es240_l240_k460: Elasticsearch 2.4.0, Logstash 2.4.0, and Kibana 4.6.0.

  • es235_l234_k454: Elasticsearch 2.3.5, Logstash 2.3.4, and Kibana 4.5.4.

  • es234_l234_k453: Elasticsearch 2.3.4, Logstash 2.3.4, and Kibana 4.5.3.

  • es234_l234_k452: Elasticsearch 2.3.4, Logstash 2.3.4, and Kibana 4.5.2.

  • es233_l232_k451: Elasticsearch 2.3.3, Logstash 2.3.2, and Kibana 4.5.1.

  • es232_l232_k450: Elasticsearch 2.3.2, Logstash 2.3.2, and Kibana 4.5.0.

  • es231_l231_k450: Elasticsearch 2.3.1, Logstash 2.3.1, and Kibana 4.5.0.

  • es230_l230_k450: Elasticsearch 2.3.0, Logstash 2.3.0, and Kibana 4.5.0.

  • es221_l222_k442: Elasticsearch 2.2.1, Logstash 2.2.2, and Kibana 4.4.2.

  • es220_l222_k441: Elasticsearch 2.2.0, Logstash 2.2.2, and Kibana 4.4.1.

  • es220_l220_k440: Elasticsearch 2.2.0, Logstash 2.2.0, and Kibana 4.4.0.

  • E1L1K4: Elasticsearch 1.7.3, Logstash 1.5.5, and Kibana 4.1.2.

Note – See the documentation page for more information on pulling specific combinations of versions of Elasticsearch, Logstash and Kibana.
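
For example, to pull one of the specific combinations listed above rather than latest, append the tag to the image name:

docker pull sebp/elk:es241_l240_k461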

About

Written by Sébastien Pujadas, released under the Apache 2 license.

Docker Pull Command

docker pull sebp/elk

Owner: sebp

Comments (101)
amills89
3 days ago

My issue below was related to not giving Docker enough RAM!
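
(For readers hitting the same symptom: a rough sketch of the memory knobs involved, assuming the ES_HEAP_SIZE and LS_HEAP_SIZE environment variables described in the image's documentation — the 2g/1g values are illustrative. On Docker for Mac/Windows, the VM's overall RAM allowance is raised separately, in Docker's preferences.)

# Start the stack with explicit Elasticsearch and Logstash heap sizes:
docker run -e ES_HEAP_SIZE="2g" -e LS_HEAP_SIZE="1g" \
  -p 5601:5601 -p 9200:9200 -p 5044:5044 -it --name elk sebp/elk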

amills89
3 days ago

I have recently begun seeing issues hitting Elasticsearch when running the image...

docker run -p 5601:5601 -p 9200:9200 -p 5044:5044 -it --name elk_2 sebp/elk:latest

  • Starting periodic command scheduler cron
  • Starting Elasticsearch Server
    waiting for Elasticsearch to be up (1/30)
    waiting for Elasticsearch to be up (2/30)
    waiting for Elasticsearch to be up (3/30)
    waiting for Elasticsearch to be up (4/30)
    waiting for Elasticsearch to be up (5/30)
    waiting for Elasticsearch to be up (6/30)
    waiting for Elasticsearch to be up (7/30)
    waiting for Elasticsearch to be up (8/30)
    waiting for Elasticsearch to be up (9/30)
    Waiting for Elasticsearch cluster to respond (1/30)
    logstash started.
  • Starting Kibana5
    ==> /var/log/elasticsearch/elasticsearch.log <==
    [2017-10-18T18:29:58,864][INFO ][o.e.d.DiscoveryModule ] [EreZ8Lm] using discovery type [zen]
    [2017-10-18T18:29:59,324][INFO ][o.e.n.Node ] initialized
    [2017-10-18T18:29:59,325][INFO ][o.e.n.Node ] [EreZ8Lm] starting ...
    [2017-10-18T18:29:59,851][INFO ][o.e.t.TransportService ] [EreZ8Lm] publish_address {172.17.0.2:9300}, bound_addresses {0.0.0.0:9300}
    [2017-10-18T18:29:59,862][INFO ][o.e.b.BootstrapChecks ] [EreZ8Lm] bound or publishing to a non-loopback or non-link-local address, enf
    [2017-10-18T18:30:00,328][INFO ][o.e.m.j.JvmGcMonitorService] [EreZ8Lm] [gc][1] overhead, spent [338ms] collecting in the last [1s]
    [2017-10-18T18:30:02,967][INFO ][o.e.c.s.ClusterService ] [EreZ8Lm] new_master {EreZ8Lm}{EreZ8LmGQL2yh6CKFSFyVA}{z-32QE2DRUOz4e6lcaoyag}
    lected-as-master ([0] nodes joined)
    [2017-10-18T18:30:03,010][INFO ][o.e.h.n.Netty4HttpServerTransport] [EreZ8Lm] publish_address {172.17.0.2:9200}, bound_addresses {0.0.0.0:
    [2017-10-18T18:30:03,010][INFO ][o.e.n.Node ] [EreZ8Lm] started
    [2017-10-18T18:30:03,018][INFO ][o.e.g.GatewayService ] [EreZ8Lm] recovered [0] indices into cluster_state

==> /var/log/logstash/logstash-plain.log <==

==> /var/log/kibana/kibana5.log <==
{"type":"log","@timestamp":"2017-10-18T18:30:10Z","tags":["status","plugin:kibana@5.6.3","info"],"pid":211,"state":"green","message":"Stat
revState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2017-10-18T18:30:10Z","tags":["status","plugin:elasticsearch@5.6.3","info"],"pid":211,"state":"yellow","messag
Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2017-10-18T18:30:19Z","tags":["status","plugin:elasticsearch@5.6.3","error"],"pid":211,"state":"red","message"
out after 3000ms","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","@timestamp":"2017-10-18T18:30:19Z","tags":["status","plugin:console@5.6.3","info"],"pid":211,"state":"green","message":"Sta
prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2017-10-18T18:30:19Z","tags":["status","plugin:metrics@5.6.3","info"],"pid":211,"state":"green","message":"Sta
prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2017-10-18T18:30:19Z","tags":["status","plugin:timelion@5.6.3","info"],"pid":211,"state":"green","message":"St
"prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2017-10-18T18:30:19Z","tags":["listening","info"],"pid":211,"message":"Server running at http://0.0.0.0:5601"}
{"type":"log","@timestamp":"2017-10-18T18:30:19Z","tags":["status","ui settings","error"],"pid":211,"state":"red","message":"Status change
is red","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2017-10-18T18:30:21Z","tags":["error","elasticsearch","admin"],"pid":211,"message":"Request error, retrying\nH
127.0.0.1:9200"}
{"type":"log","@timestamp":"2017-10-18T18:30:21Z","tags":["warning","elasticsearch","admin"],"pid":211,"message":"Unable to revive connect
{"type":"log","@timestamp":"2017-10-18T18:30:21Z","tags":["warning","elasticsearch","admin"],"pid":211,"message":"No living connections"}
{"type":"log","@timestamp":"2017-10-18T18:30:21Z","tags":["status","plugin:elasticsearch@5.6.3","error"],"pid":211,"state":"red","message"
ct to Elasticsearch at http://localhost:9200.","prevState":"red","prevMsg":"Request Timeout after 3000ms"}
{"type":"log","@timestamp":"2017-10-18T18:30:24Z","tags":["warning","elasticsearch","admin"],"pid":211,"message":"Unable to revive connect
{"type":"log","@timestamp":"2017-10-18T18:30:24Z","tags":["warning","elasticsearch","admin"],"pid":211,"message":"No living connections"}

==> /var/log/logstash/logstash-plain.log <==
[2017-10-18T18:30:27,139][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"/opt/logstash/mod
[2017-10-18T18:30:27,143][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"/opt/logstash/modul
[2017-10-18T18:30:27,153][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/opt/logstash/dat
[2017-10-18T18:30:27,154][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/opt/
[2017-10-18T18:30:27,198][INFO ][logstash.agent ] No persistent UUID file found. Generating new UUID {:uuid=>"3972ce08-b189-457a
id"}

==> /var/log/kibana/kibana5.log <==
{"type":"log","@timestamp":"2017-10-18T18:30:26Z","tags":["warning","elasticsearch","admin"],"pid":211,"message":"Unable to revive connect
{"type":"log","@timestamp":"2017-10-18T18:30:26Z","tags":["warning","elasticsearch","admin"],"pid":211,"message":"No living connections"}

==> /var/log/logstash/logstash-plain.log <==
[2017-10-18T18:30:27,832][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http:/
[2017-10-18T18:30:27,838][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:h
}
[2017-10-18T18:30:27,929][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error.
tash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Mantico
n refused)"}
[2017-10-18T18:30:27,931][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :ho
[2017-10-18T18:30:28,049][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=
light"=>250}
[2017-10-18T18:30:28,694][INFO ][logstash.inputs.beats ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2017-10-18T18:30:28,764][INFO ][logstash.pipeline ] Pipeline main started
[2017-10-18T18:30:28,837][INFO ][org.logstash.beats.Server] Starting server on port: 5044
[2017-10-18T18:30:28,994][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}

==> /var/log/kibana/kibana5.log <==
{"type":"log","@timestamp":"2017-10-18T18:30:29Z","tags":["warning","elasticsearch","admin"],"pid":211,"message":"Unable to revive connect
{"type":"log","@timestamp":"2017-10-18T18:30:29Z","tags":["warning","elasticsearch","admin"],"pid":211,"message":"No living connections"}

It continues this behavior forever without Elasticsearch ever being reached successfully. I can also run the older image (es241_l240_k461) successfully, but anything recent gives me the same issue.
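
(A quick way to tell whether Elasticsearch itself is reachable, independently of Kibana and Logstash — localhost:9200 assumes the -p 9200:9200 mapping from the docker run command above, and the second command assumes curl is available inside the container:)

# From the host:
curl http://localhost:9200/
# Or from inside the container:
docker exec -it elk_2 curl http://localhost:9200/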

jerommeke4
3 days ago

In the most recent builds I get a timeout in the 'waiting for Elasticsearch to be up' section. After 30 attempts it fails and the Elasticsearch log cannot be retrieved. Is this a known issue? Older versions (like es241_l240_k461) do work.

Jeroens-MacBook-Pro:~ jeroenvanderlaan$ docker run -p 5601:5601 -p 9200:9200 -p 5044:5044 -it --name elk-test sebp/elk:562
 * Starting periodic command scheduler cron                                                                                                                                                        [ OK ]
 * Starting Elasticsearch Server                                                                                                                                                                   [fail]
waiting for Elasticsearch to be up (1/30)
waiting for Elasticsearch to be up (2/30)
waiting for Elasticsearch to be up (3/30)
waiting for Elasticsearch to be up (4/30)
waiting for Elasticsearch to be up (5/30)
waiting for Elasticsearch to be up (6/30)
waiting for Elasticsearch to be up (7/30)
waiting for Elasticsearch to be up (8/30)
waiting for Elasticsearch to be up (9/30)
waiting for Elasticsearch to be up (10/30)
waiting for Elasticsearch to be up (11/30)
waiting for Elasticsearch to be up (12/30)
waiting for Elasticsearch to be up (13/30)
waiting for Elasticsearch to be up (14/30)
waiting for Elasticsearch to be up (15/30)
waiting for Elasticsearch to be up (16/30)
waiting for Elasticsearch to be up (17/30)
waiting for Elasticsearch to be up (18/30)
waiting for Elasticsearch to be up (19/30)
waiting for Elasticsearch to be up (20/30)
waiting for Elasticsearch to be up (21/30)
waiting for Elasticsearch to be up (22/30)
waiting for Elasticsearch to be up (23/30)
waiting for Elasticsearch to be up (24/30)
waiting for Elasticsearch to be up (25/30)
waiting for Elasticsearch to be up (26/30)
waiting for Elasticsearch to be up (27/30)
waiting for Elasticsearch to be up (28/30)
waiting for Elasticsearch to be up (29/30)
waiting for Elasticsearch to be up (30/30)
Couln't start Elasticsearch. Exiting.
Elasticsearch log follows below.
cat: /var/log/elasticsearch/elasticsearch.log: No such file or directory
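
(A common cause of this failure with the Elasticsearch 5.x images is the host's vm.max_map_count kernel setting being too low; the image's documentation lists raising it as a prerequisite. A sketch, to be run on the Docker host:)

# Check the current value; Elasticsearch 5 requires at least 262144.
sysctl vm.max_map_count
# Raise it for the running system...
sudo sysctl -w vm.max_map_count=262144
# ...and persist it across reboots:
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf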
sebp
a month ago

@aryeetey You're most likely running Docker behind a proxy that requires authentication; see e.g. https://docs.docker.com/engine/admin/systemd/#httphttps-proxy
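
(Following the page linked above, a minimal systemd drop-in for a proxy looks like this — the host, port, and credentials are placeholders:)

# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://user:password@proxy.example.com:80/"

# Then reload systemd and restart the Docker daemon:
sudo systemctl daemon-reload
sudo systemctl restart docker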

aryeetey
a month ago

Hi, I am trying to use ELK in my project, but any time I run mvn integration-test, the pull of sebp/elk gives me this error: io.fabric8.maven.docker.access.DockerAccessException: Unable to pull 'sebp/elk:latest' : unauthorized: authentication required

brianjill
a month ago

Hi,

I followed the "Updating Logstash's configuration" instructions from https://elk-docker.readthedocs.io/

Here is my Dockerfile:

FROM sebp/elk

# overwrite existing file
ADD ./conf/logstash/30-output.conf /etc/logstash/conf.d/30-output.conf

# add new file
ADD ./conf/logstash/12-sa_full.conf /etc/logstash/conf.d/12-sa_full.conf
ADD ./conf/logstash/13-tomcat7-stderr.conf /etc/logstash/conf.d/13-tomcat7-stderr.conf
ADD ./conf/logstash/14-tomcat7-stdout.conf /etc/logstash/conf.d/14-tomcat7-stdout.conf
ADD ./conf/logstash/15-sa-log.conf /etc/logstash/conf.d/15-sa-log.conf
ADD ./conf/logstash/29-logstash.conf /etc/logstash/conf.d/29-logstash.conf

In the directory where my Dockerfile resides, I execute:

docker build .
docker-compose up

but the indices in Kibana are still from the old 12-sa_full.conf. The new indices I specified in the updated 12-sa_full.conf are not showing up in Kibana after Filebeat harvests the new log file. Please help.
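
(One likely explanation, assuming the docker-compose.yml refers to an image by name: docker build . produces an untagged image that Compose never picks up. A sketch of two possible fixes — the my-elk name is illustrative:)

# Tag the custom build and reference "image: my-elk" in docker-compose.yml:
docker build -t my-elk .
# Or, if the service has a build: section, force Compose to rebuild:
docker-compose up --build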

sebp
2 months ago

@juneyoungoh Logstash's Beats input interface is bound to port 5044; what is bound to port 9600 (by default) is Logstash's API (used e.g. for monitoring).
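
(For instance, assuming port 9600 has been published with -p 9600:9600, the monitoring API answers with basic node info:)

curl "http://localhost:9600/?pretty"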

@mnhmilu Try later, you're probably having temporary network connectivity issues.

mnhmilu
2 months ago

Cannot pull, getting this. Please help.
13bd660f: Pulling fs layer
error pulling image configuration: Get https://registry-1.docker.io/v2/sebp/elk/blobs/sha256:541bfb37f314989d75897461442cc1390285cad1ca4ab5b8e0d22f6a213e47ba: net/http: TLS handshake timeout
root@default:~#

juneyoungoh
2 months ago

Thanks so much for this image. I have a question though: why is Logstash bound to port 9600? I could not find such a port setting in /etc/logstash/conf.d. I thought port 5044 was for Logstash... Where can I find the port setting for Logstash?

  • 9200 for Elasticsearch, 5601 for Kibana, 5044 for Logstash, right?
winwin
4 months ago

Thank you very much Sébastien Pujadas.
@jaegerbane - Thank you. I was stuck at sending a dummy log entry.