Apache Spark
Apache Spark™ is a multi-language engine for executing data engineering, data science, and machine learning on single-node machines or clusters. It provides high-level APIs in Scala, Java, Python, and R, and an optimized engine that supports general computation graphs for data analysis. It also supports a rich set of higher-level tools including Spark SQL for SQL and DataFrames, pandas API on Spark for pandas workloads, MLlib for machine learning, GraphX for graph processing, and Structured Streaming for stream processing.
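As a sketch of what the higher-level APIs mentioned above look like, the following spark-shell snippet mixes the DataFrame API with Spark SQL; the table name, column names, and data are illustrative, and a SparkSession named `spark` is assumed to be available (spark-shell provides one):

```scala
// Assumes a spark-shell session, where `spark` and spark.implicits._ are in scope.
import spark.implicits._

// Build a small DataFrame from local data (illustrative values).
val people = Seq(("Alice", 34), ("Bob", 29)).toDF("name", "age")

// DataFrame API: filter and show rows with age > 30.
people.filter($"age" > 30).show()

// Spark SQL: register the DataFrame as a temporary view and query it.
people.createOrReplaceTempView("people")
spark.sql("SELECT name FROM people WHERE age > 30").show()
```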
You can find the latest Spark documentation, including a programming guide, on the project web page. This README file only contains basic setup instructions.
The easiest way to start using Spark is through the Scala shell:
docker run -it apache/spark /opt/spark/bin/spark-shell
Try the following command, which should return 1,000,000,000:
scala> spark.range(1000 * 1000 * 1000).count()
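Continuing in the same shell, a slightly richer session as a sketch — `spark.range` produces a Dataset of Longs, and the usual Dataset operations apply:

```scala
// Dataset of the values 0 through 9.
scala> val ds = spark.range(10)

// Count the even values (0, 2, 4, 6, 8).
scala> ds.filter(_ % 2 == 0).count()   // returns 5
```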
To run Spark on Kubernetes, see the guide at https://spark.apache.org/docs/latest/running-on-kubernetes.html
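As a sketch, a cluster-mode submission against Kubernetes looks like the following; the API-server host and port are placeholders you must replace, and the application name, executor count, and example JAR path are illustrative:

```shell
# Placeholders: <k8s-apiserver-host> and <port> must point at your cluster's API server.
./bin/spark-submit \
  --master k8s://https://<k8s-apiserver-host>:<port> \
  --deploy-mode cluster \
  --name spark-pi \
  --class org.apache.spark.examples.SparkPi \
  --conf spark.executor.instances=2 \
  --conf spark.kubernetes.container.image=apache/spark \
  local:///opt/spark/examples/jars/spark-examples.jar
```

The `spark.kubernetes.container.image` setting tells the driver which image to launch executors from; here it reuses the apache/spark image pulled below.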
For PySpark, use the images at https://hub.docker.com/r/apache/spark-py
For SparkR, use the images at https://hub.docker.com/r/apache/spark-r
To pull the base Spark image:
docker pull apache/spark