Kafka Docker

Docker image containing a standard Kafka distribution.

Currently supported versions:

  • Kafka for Scala 2.11 and Java 1.8.0_72

Image details:

  • Installation directory: /usr/local/apache-kafka/current

Kafka Docker image

To start the Kafka Docker image:

docker run -i -t bde2020/kafka /bin/bash

To build the Kafka Docker image:

git clone <source repository>
docker build -t bde2020/kafka .

To start a Kafka server inside this Docker image:

  • Update at least zookeeper.connect in /usr/local/apache-kafka/current/config/server.properties to point to your ZooKeeper installation. A chroot can be used and will be created upon startup, e.g. zookeeper.connect=zk-host:2181/kafka (substitute your own ZooKeeper address and chroot).
  • Run the following commands:

    cd /usr/local/apache-kafka/current
    ./bin/kafka-server-start.sh ./config/server.properties
  • Note that it is possible to override any setting using the --override command line argument, in case a hardcoded properties file is not desired:

    cd /usr/local/apache-kafka/current
    ./bin/kafka-server-start.sh ./config/server.properties \
    --override zookeeper.connect=zk-host:2181/kafka
  • Note that, depending on the environment this image is used in, it might be necessary to change the advertised.host.name and advertised.port parameters. This can be achieved with the --override command line argument shown above.
  • For the complete documentation on available parameters, refer to the official Kafka broker configuration documentation.
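The --override flags above are plain key=value pairs appended to the start script's arguments. As a rough sketch (the helper below is hypothetical, not part of the image, and the ZooKeeper address is a placeholder), such a flag list can be generated from a settings dict:

```python
# Sketch: turn a dict of Kafka broker settings into the --override
# flags accepted by kafka-server-start.sh. Illustrative helper only,
# not shipped with the image.

def override_flags(settings):
    """Render each key/value pair as '--override key=value'."""
    flags = []
    for key, value in sorted(settings.items()):
        flags.append("--override")
        flags.append(f"{key}={value}")
    return flags

cmd = ["./bin/kafka-server-start.sh", "./config/server.properties"]
cmd += override_flags({
    "zookeeper.connect": "zk-host:2181/kafka",  # placeholder address
    "delete.topic.enable": "true",
})
print(" ".join(cmd))
```

Any broker property from server.properties can be supplied this way, which keeps the image itself free of hardcoded configuration.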

To start Kafka Docker image on Marathon:

  • Create a Marathon application definition in JSON like the one below, store it in a file (e.g. marathon-kafka.json) and POST it to Marathon's /v2/apps endpoint.

      {
          "container": {
              "type": "DOCKER",
              "volumes": [
                  {
                      "containerPath": "/tmp/kafka-logs",
                      "hostPath": "/var/lib/bde/kafka-logs",
                      "mode": "RW"
                  }
              ],
              "docker": {
                  "network": "BRIDGE",
                  "image": "bde2020/docker-kafka",
                  "portMappings": [
                      { "containerPort": 9092, "hostPort": 9092 }
                  ]
              }
          },
          "cpus": 0.2,
          "mem": 512,
          "cmd": "cd /usr/local/apache-kafka/current && ./bin/kafka-server-start.sh ./config/server.properties --override zookeeper.connect=<zookeeper-url> --override delete.topic.enable=true --override advertised.host.name=$HOST --override advertised.port=$PORT0"
      }
  • Note that 9092 is the default port for Kafka brokers. For the above example it is necessary that Mesos is configured to include this port in its resource port range; see the Mesos documentation for details.

  • Note that in the above example Kafka's default log directory is mounted on the host. Whether this is necessary depends on the specific use case.
  • Note that the above example configures the Docker image to run its network in bridge mode, so that Kafka brokers will always (also after a restart) be available at host.url:9092. For this to work properly it is necessary to override advertised.host.name and advertised.port in the Kafka server startup command. If the above JSON is run inside Marathon, each Kafka broker will be reachable at its host's hostname on port 9092; if the Mesos cluster contains a second slave, a second Kafka broker becomes available on that host upon scaling in Marathon, resulting in a consistent and foreseeable deployment.
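Since the application definition is plain JSON, it can also be assembled programmatically, e.g. when deploying several variants with different resources. A minimal Python sketch (the app id, hostnames, and ZooKeeper URL are placeholder assumptions, not taken from the original file):

```python
import json

# Sketch: assemble a Marathon app definition like the one above and
# serialize it for a POST to Marathon's /v2/apps endpoint.
app = {
    "id": "kafka",  # assumed app id; adapt to your naming scheme
    "container": {
        "type": "DOCKER",
        "volumes": [{
            "containerPath": "/tmp/kafka-logs",
            "hostPath": "/var/lib/bde/kafka-logs",
            "mode": "RW",
        }],
        "docker": {
            "network": "BRIDGE",
            "image": "bde2020/docker-kafka",
            "portMappings": [{"containerPort": 9092, "hostPort": 9092}],
        },
    },
    "cpus": 0.2,
    "mem": 512,
    "cmd": ("cd /usr/local/apache-kafka/current && "
            "./bin/kafka-server-start.sh ./config/server.properties "
            "--override zookeeper.connect=zk-host:2181/kafka "  # placeholder
            "--override delete.topic.enable=true "
            "--override advertised.host.name=$HOST "
            "--override advertised.port=$PORT0"),
}
payload = json.dumps(app, indent=2)
```

The resulting payload can then be submitted with, for example, curl -X POST -H "Content-Type: application/json" -d @marathon-kafka.json http://<marathon-host>:8080/v2/apps.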

To create a Kafka topic:

  • Note that the following example assumes that the Kafka Docker image is deployed using Marathon as above and scaled to three servers.
  • Log into the Kafka Docker container on one of these servers by issuing

    docker ps

    on one of the hosts. This will expose the container ID of the running bde2020/docker-kafka container, e.g. 8b797c0d80b3.

    docker exec -t -i 8b797c0d80b3 /bin/bash

    Inside the Docker container, cd into /usr/local/apache-kafka/current. Issue the following command to see the available options for topic creation:

    ./bin/kafka-topics.sh --help
  • Note that the following options are required to create a topic: --zookeeper, --partitions, --replication-factor, --topic, --create. The command below creates a sample topic with 3 partitions and a replication factor of one. Note that the ZooKeeper URL needs to be adapted to match the local installation; the ZooKeeper chroot is free to choose.

    ./bin/kafka-topics.sh --create --topic sampleTopic \
    --zookeeper <zookeeper-url> \
    --partitions 3 \
    --replication-factor 1
  • The topic will now show up in the list-topics command:

    ./bin/kafka-topics.sh --list --zookeeper <zookeeper-url>
  • kafka-console-producer.sh can be used to create some sample messages. Start the producer by issuing the following command. Note that this requires a running broker, which is available at the host's hostname and port 9092 if the above guidelines have been followed; the actual URL of the Kafka broker needs to be adapted to the local environment.

    ./bin/kafka-console-producer.sh --topic sampleTopic --broker-list <broker-host>:9092

    After starting the producer, simply type some messages in the console, hitting enter after every single message; we will consume these messages in the next step. Hit ctrl-c to stop the producer.

  • kafka-console-consumer.sh can be used to consume messages. Issue the following command to consume the messages created in the previous step. Again, adapt the ZooKeeper URL and the bootstrap-server URL to the local environment.

    ./bin/kafka-console-consumer.sh --topic sampleTopic \
    --zookeeper <zookeeper-url> \
    --bootstrap-server bigdata-one.example:9092 --from-beginning
  • For further details on the above examples, refer to the official Kafka documentation.

  • Note that you'll find Kafka's log directories on the hosting machines if the Marathon setup from above has been used: under /var/lib/bde/kafka-logs/ there will be a directory like sampleTopic-{integer}, depending on the partitions the host has been assigned.
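The directory names follow Kafka's <topic>-<partition> convention. The sketch below illustrates how three partitions with a replication factor of one end up on three brokers; it uses a simple round-robin and is not Kafka's exact assignment algorithm (which also randomizes the starting broker):

```python
# Sketch: with 3 partitions and replication-factor 1, each partition
# lives on exactly one broker, and its log directory on that broker's
# host is named <topic>-<partition>. Broker names are placeholders.

def spread_partitions(topic, partitions, brokers):
    """Map each partition's log directory to a broker, round-robin."""
    assignment = {}
    for p in range(partitions):
        directory = f"{topic}-{p}"
        assignment[directory] = brokers[p % len(brokers)]
    return assignment

layout = spread_partitions("sampleTopic", 3,
                           ["broker-1", "broker-2", "broker-3"])
# Each broker host ends up with one sampleTopic-<n> directory under
# its mounted log path (e.g. /var/lib/bde/kafka-logs/).
```

With a higher replication factor, each directory would additionally appear on the follower brokers' hosts.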

To run the Kafka Image on the BDE Platform:

  • To integrate easily with the BDE Platform, this image provides the possibility to start up Apache Kafka and create a Kafka topic (in case it doesn't exist) using JSON config files.
  • These JSON files correspond to the many options that are available in Apache Kafka and are translated to Apache Kafka shell commands on the fly.
  • To run the Apache Kafka image on the BDE Platform, it is necessary to extend this image, adding two JSON files (see below) to the /config directory.

    • The Dockerfile

      FROM bde2020/kafka
      ADD kafka-startup.json /config/
      ADD kafka-init.json /config/
    • The kafka-startup.json (example, note that it is possible to override any option using the below template)

    • The kafka-init.json (example, the options are dependent on the use case)

  • To start up the image

  • To run an Apache Kafka cluster on docker-swarm:

    • Set up a ZooKeeper cluster.
    • Follow the above instructions on extending the Apache Kafka base image.
    • Find a full example of an extension here:
    • Find a full example here:
    • Create the following docker-compose snippet within your docker-compose.yml:
      your-kafka-1:
        image: "your/kafka-extension"
        links:
          - your_zookeeper_1
          - your_zookeeper_2
          - your_zookeeper_3
        ports:
          - 9092:9092
        command: "bash -c /app/bin/kafka-init"
        hostname: "your-kafka-1"
      your-kafka-2:
        image: "your/kafka-extension"
        links:
          - your_zookeeper_1
          - your_zookeeper_2
          - your_zookeeper_3
        ports:
          - 9092:9092
        command: "bash -c /app/bin/kafka-init"
        hostname: "your-kafka-2"
      your-kafka-3:
        image: "your/kafka-extension"
        links:
          - your_zookeeper_1
          - your_zookeeper_2
          - your_zookeeper_3
        ports:
          - 9092:9092
        command: "bash -c /app/bin/kafka-init"
        hostname: "your-kafka-3"
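The three service blocks differ only in their hostname, so the compose fragment can also be generated rather than copied by hand. A hypothetical Python sketch (service names, image name, and the ZooKeeper link targets are placeholders from the snippet above):

```python
# Sketch: generate N near-identical Kafka service definitions for a
# docker-compose file. All names are placeholders, not fixed by the image.

def kafka_service(index, zookeepers):
    """Build one compose service entry for broker number `index`."""
    return {
        "image": "your/kafka-extension",
        "links": list(zookeepers),
        "ports": ["9092:9092"],
        "command": "bash -c /app/bin/kafka-init",
        "hostname": f"your-kafka-{index}",
    }

zks = ["your_zookeeper_1", "your_zookeeper_2", "your_zookeeper_3"]
services = {f"your-kafka-{i}": kafka_service(i, zks) for i in (1, 2, 3)}
```

Dumping `services` with a YAML library (e.g. PyYAML's yaml.safe_dump) would reproduce the snippet above; scaling to more brokers is then a one-line change.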