SID - Mesosphere

Getting Started

Note: when testing locally with Docker, make sure you add the following entries to your /etc/hosts file:

  master.mesos
  cassandra-dcos-node.cassandra.dcos.mesos
  broker-0.kafka.mesos
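As a convenience, the /etc/hosts entries can be generated with a small shell snippet. The master IP below is a placeholder; substitute your DC/OS master's actual address:

```shell
# Hypothetical master IP -- replace with your DC/OS master's public address.
MASTER_IP="10.0.0.1"

# Build /etc/hosts entries for the Mesos-DNS names used in this guide.
HOSTS_ENTRIES=""
for host in master.mesos \
            cassandra-dcos-node.cassandra.dcos.mesos \
            broker-0.kafka.mesos; do
  HOSTS_ENTRIES="${HOSTS_ENTRIES}${MASTER_IP}\t${host}\n"
done

# Print the entries; append them to /etc/hosts,
# e.g. with: printf "$HOSTS_ENTRIES" | sudo tee -a /etc/hosts
printf "$HOSTS_ENTRIES"
```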

  • Install the DC/OS CLI
  mkdir -p dcos && cd dcos && \
    curl -O && \
    bash ./ . && \
    source ./bin/env-setup
  • Add community packages
  dcos config prepend package.sources
  dcos config prepend package.sources
  dcos package update --validate
  • Install Chronos (scheduler)
  dcos package install chronos
  • Install Spark (map/reduce)
  dcos package install spark
  • Install Kafka
  dcos package install kafka
  • Install Cassandra (db)
  dcos package install cassandra
  • Install Spark Notebook
  dcos package install --app spark-notebook --package-version=0.0.2
  • Install HDFS
    Note: HDFS requires at least 5 slaves, and Spark Streaming uses HDFS to store
    intermediary results.
  dcos package install hdfs


  • Start 3 Kafka brokers on the cluster
  dcos kafka broker add 0..2
  dcos kafka broker update 0..2 --options num.partitions=6,default.replication.factor=2
  dcos kafka broker start 0..2
  • Create Cassandra table/keyspace
  dcos node ssh --master-proxy --master

  docker run -it --net=host --rm --entrypoint=/usr/bin/cqlsh spotify/cassandra cassandra-dcos-node.cassandra.dcos.mesos 9160

  CREATE KEYSPACE walker WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 2 };

  CREATE TABLE walker.grid (
    coord text,
    ts timestamp,
    nb int,
    PRIMARY KEY (coord, ts)
  );
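Once the table exists, you can sanity-check it from the same cqlsh session. The coordinate and timestamp values below are purely illustrative:

```
INSERT INTO walker.grid (coord, ts, nb) VALUES ('0:0', '2015-06-01 12:00:00', 1);
SELECT * FROM walker.grid WHERE coord = '0:0';
```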
  • Start Producer
  dcos marathon app add producer/sid-mesos-kafka-producer.json
  • Start Consumer
  dcos marathon app add consumer/sid-mesos-kafka-consumer.json
  • Start Spark Driver
  dcos spark run --submit-args='--class WalkerApp'
  • Start Grid
  dcos marathon app add grid/sid-mesos-cassandra-grid.json