Public | Automated Build

Last pushed: 2 years ago
Short Description
An easy way to try Spark
Full Description

Apache Spark on Docker

This repository contains a Dockerfile for building a Docker image with Apache Spark. The image builds on our previous Hadoop Docker image, available at the SequenceIQ GitHub page.
The base Hadoop Docker image is also available as an official Docker image.

Pull the image from the Docker repository

docker pull sequenceiq/spark:1.6.0

Building the image

docker build --rm -t sequenceiq/spark:1.6.0 .

Running the image

  • if you are using boot2docker, make sure your VM has more than 2GB of memory
  • in your /etc/hosts file, add $(boot2docker ip) as host 'sandbox' to make it easier to access your sandbox UI
  • open the YARN UI ports when running the container:
    docker run -it -p 8088:8088 -p 8042:8042 -h sandbox sequenceiq/spark:1.6.0 bash
  • or run it in the background:
    docker run -d -h sandbox sequenceiq/spark:1.6.0
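
The hosts-file entry from the second bullet would look something like this (the IP shown is boot2docker's usual default and is only illustrative; use whatever $(boot2docker ip) actually returns):

```
# /etc/hosts
192.168.59.103   sandbox
```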


Hadoop 2.6.0 and Apache Spark v1.6.0 on CentOS


There are two deploy modes that can be used to launch Spark applications on YARN.

YARN-client mode

In yarn-client mode, the driver runs in the client process, and the application master is only used for requesting resources from YARN.

# run the spark shell
spark-shell \
--master yarn-client \
--driver-memory 1g \
--executor-memory 1g \
--executor-cores 1

# execute the following command, which should return 1000
scala> sc.parallelize(1 to 1000).count()

YARN-cluster mode

In yarn-cluster mode, the Spark driver runs inside an application master process which is managed by YARN on the cluster, and the client can go away after initiating the application.

Estimating Pi (yarn-cluster mode):

# execute the following command, which should write "Pi is roughly 3.1418" into the logs
# note: you must pass the --files argument in cluster mode to enable metrics
spark-submit \
--class org.apache.spark.examples.SparkPi \
--files $SPARK_HOME/conf/ \
--master yarn-cluster \
--driver-memory 1g \
--executor-memory 1g \
--executor-cores 1 \
$SPARK_HOME/lib/spark-examples*.jar
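
In yarn-cluster mode the driver's output goes to the YARN container logs rather than to your terminal. A sketch of how to find the result once the application finishes (the application id is the one spark-submit prints; the sample log line below is illustrative, not captured output):

```shell
# With log aggregation enabled, fetch the finished application's logs and
# pull out the result line (substitute the real application id):
#   yarn logs -applicationId application_XXXXXXXXXXXXX_XXXX | grep "Pi is roughly"

# The grep pattern itself, demonstrated on a sample line:
echo 'Pi is roughly 3.141804' | grep -o 'Pi is roughly [0-9.]*'
```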

Estimating Pi (yarn-client mode):

# execute the following command, which should print "Pi is roughly 3.1418" to the screen
spark-submit \
--class org.apache.spark.examples.SparkPi \
--master yarn-client \
--driver-memory 1g \
--executor-memory 1g \
--executor-cores 1 \
$SPARK_HOME/lib/spark-examples*.jar
Comments (20)
4 months ago

Hi! I'm getting the following error: "unauthorized: authentication required", even though I'm logged in with my Docker Hub account.

7 months ago

@romann :-)

I tried to use the new --squash feature, but it didn't work; it errored out.

I bet this 2 GB image size for the 1.6.0 tag is only because of obscured files in earlier layers - with the squash capability (if it worked), we would get a nicely compacted single layer.

10 months ago

Is there a way to make this image smaller? It seems unreasonably large.

10 months ago

How do I get this to work with docker-compose?

a year ago

Bump on the Spark 2.0 repo timeline.

a year ago

Are there any plans for an official Spark 2.0 repo?

a year ago

Hi thanks for the image! Running it gives me the output below. Is there a parameter I need to add?

$ docker run -it -p 8088:8088 -p 8042:8042 -h sandbox sequenceiq/spark:1.6.0 -bash
Starting sshd: [ OK ]
Starting namenodes on [sandbox]
sandbox: starting namenode, logging to /usr/local/hadoop/logs/hadoop-root-namenode-sandbox.out
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-root-datanode-sandbox.out
Starting secondary namenodes [] starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-root-secondarynamenode-sandbox.out
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn--resourcemanager-sandbox.out
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-root-nodemanager-sandbox.out
/bin/bash: -c: option requires an argument

a year ago

How can I install Livy with this image? Is there any way?

2 years ago

There's a problem with Docker 1.9.1; downgrading to 1.9.0 helped.

2 years ago

The Spark shell never runs and just hangs. I can access the HDFS web interface and run HDFS queries, though.