Docker container for a Sparkling Water standalone cluster
To run the master, execute:
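The original command was not preserved here; a typical invocation looks like the following. The image tag (`sparkling-water-base`) and the start-script path are illustrative assumptions — substitute the ones from your build:

```shell
# Start the Spark master. The container name "spark-master" matters:
# workers locate the master through it (see below).
# Image name and script path are assumptions for illustration.
docker run -it --rm --name=spark-master \
  sparkling-water-base /sparkling-water/bin/docker/start-master.sh
```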
To run a worker, execute:
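Again, the exact command was not preserved; a sketch under the same assumptions (illustrative image name and script path), linking the worker to the master container so the hostname `spark-master` resolves inside it:

```shell
# Start a worker. --link makes the hostname "spark-master" resolvable
# inside the worker container so it can register with the master.
# Image name and script path are assumptions for illustration.
docker run -it --rm --link spark-master:spark-master \
  sparkling-water-base /sparkling-water/bin/docker/start-worker.sh
```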
You can run multiple workers. Each worker finds the master by its container name, "spark-master".
To run a Spark shell against this cluster, execute:
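The command itself is missing from this copy; a plausible sketch, assuming the master is reachable as `spark-master` and listens on Spark standalone's default port 7077, and that `$SPARK_HOME` points at your Spark installation:

```shell
# Connect a Spark shell to the standalone master.
# Hostname assumes the shell runs where "spark-master" resolves
# (e.g. in a linked container).
$SPARK_HOME/bin/spark-shell --master spark://spark-master:7077
```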
To run the Sparkling Shell:
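The command is likewise missing here; a sketch under the same assumptions (master reachable as `spark-master` on the default standalone port 7077):

```shell
# Launch Sparkling Shell against the same standalone master.
bin/sparkling-shell --master spark://spark-master:7077
```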
Sparkling Shell accepts common Spark Shell arguments.
For example, to increase the memory allocated to each executor, use the `spark.executor.memory` parameter:

```shell
bin/sparkling-shell --conf "spark.executor.memory=4g"
```
Inside the Sparkling Shell, create an `H2OContext`:

```scala
import org.apache.spark.h2o._
import org.apache.spark.examples.h2o._
val hc = H2OContext.getOrCreate(sc)
```