apache/spark

Sponsored OSS

By The Apache Software Foundation

Updated 2 months ago

Apache Spark


Apache Spark™ is a multi-language engine for executing data engineering, data science, and machine learning on single-node machines or clusters. It provides high-level APIs in Scala, Java, Python, and R, and an optimized engine that supports general computation graphs for data analysis. It also supports a rich set of higher-level tools including Spark SQL for SQL and DataFrames, pandas API on Spark for pandas workloads, MLlib for machine learning, GraphX for graph processing, and Structured Streaming for stream processing.

https://spark.apache.org/

Online Documentation

You can find the latest Spark documentation, including a programming guide, on the project web page. This README file only contains basic setup instructions.

Interactive Scala Shell

The easiest way to start using Spark is through the Scala shell:

docker run -it apache/spark /opt/spark/bin/spark-shell
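Standard spark-shell options can be forwarded through docker run in the same way; a small sketch (the local[4] master and 2g driver memory are illustrative values, not recommendations from this page):

```shell
# Same entrypoint as above, with spark-shell options appended.
docker run -it apache/spark /opt/spark/bin/spark-shell \
  --master "local[4]" \
  --driver-memory 2g
```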

Try the following command, which should return 1,000,000,000:

scala> spark.range(1000 * 1000 * 1000).count()
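The shell pre-creates a SparkSession bound to the name spark, so you can continue with ordinary DataFrame operations at the same prompt. A slightly larger sketch along the same lines (the expressions are standard Spark SQL, nothing specific to this image):

```scala
// Entered at the scala> prompt; `spark` is provided by the shell.
val df = spark.range(1, 11)                 // a Dataset of the ids 1 through 10
df.selectExpr("sum(id)", "avg(id)").show()  // sum is 55, average is 5.5
```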

Running Spark on Kubernetes

https://spark.apache.org/docs/latest/running-on-kubernetes.html
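The page above covers the details; the general shape of a cluster-mode submission looks like the following sketch. The API-server host and port, the executor count, and the examples jar path are placeholders to be filled in for your cluster, not values from this page:

```shell
# Submit an application to Kubernetes in cluster mode, running driver and
# executors in containers built from the apache/spark image.
./bin/spark-submit \
  --master k8s://https://<k8s-apiserver-host>:<k8s-apiserver-port> \
  --deploy-mode cluster \
  --name spark-pi \
  --class org.apache.spark.examples.SparkPi \
  --conf spark.executor.instances=2 \
  --conf spark.kubernetes.container.image=apache/spark \
  local:///opt/spark/examples/jars/<spark-examples jar>
```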

Running Python on Spark

Use the images at https://hub.docker.com/r/apache/spark-py
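Assuming the spark-py image mirrors the layout of apache/spark, the PySpark shell can be started the same way as the Scala shell above:

```shell
# Start the interactive PySpark REPL from the Python image.
docker run -it apache/spark-py /opt/spark/bin/pyspark
```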

Running R on Spark

Use the images at https://hub.docker.com/r/apache/spark-r
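Likewise for R, assuming the spark-r image follows the same layout (sparkR is the standard launcher script shipped with Spark):

```shell
# Start the interactive SparkR shell from the R image.
docker run -it apache/spark-r /opt/spark/bin/sparkR
```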

Docker Pull Command

docker pull apache/spark