Unless you already have a working Apache Spark cluster, you will need Docker for a simple environment setup. The provided `docker-compose.yml` and the Spark configuration files in the `conf` directory are cloned from https://github.com/gettyimages/docker-spark.
- Make sure Docker is installed properly and `docker-compose` is ready to use
- Run `$ docker-compose up -d` under the `data-mr` directory
- Check the Spark UI at http://localhost:8080; you should see 1 master and 1 worker
- Run `$ docker exec -it datamr_master_1 /bin/bash` to get into the container shell, and start using Spark commands such as `spark-shell`, `pyspark` or `spark-submit`. You may need to replace `datamr_master_1` with the actual container name spawned by the `docker-compose` process
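The container name in the `docker exec` step depends on how `docker-compose` derives its project name. A small sketch, assuming Compose v1 naming (which lowercases the working directory name and strips punctuation; newer Compose versions keep hyphens and separate with dashes instead, e.g. `data-mr-master-1`):

```shell
# Compose v1 names containers <project>_<service>_<index>, where the
# default project name is the working directory with non-alphanumeric
# characters removed, so "data-mr" yields "datamr_master_1".
dir="data-mr"
project=$(echo "$dir" | tr -cd 'a-z0-9')
echo "${project}_master_1"   # → datamr_master_1

# When in doubt, just list the running containers:
#   docker ps --format '{{.Names}}'
```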
If you're not already familiar with Apache Spark, you'll need to go through its documentation for the available APIs. The Spark version that comes with this Docker setup is determined by https://github.com/gettyimages/docker-spark.
For jobs that rely on external dependencies and libraries, make sure they are properly packaged on submission.
On submission, we will need:
- Source code of the solution
- Build instructions for job packaging (unless your solution is a single `.py` file), such as Maven or SBT for Scala/Java, or `setup.py` for a Python `.zip`/`.egg`
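For Python, one minimal way to bundle extra modules is to zip them and pass the archive to `spark-submit --py-files`. A sketch; the package name `jobutils`, the archive `deps.zip`, and the job script `main.py` are made-up examples:

```shell
# Hypothetical helper package to ship alongside the main job script.
mkdir -p jobutils
printf 'def run():\n    return "ok"\n' > jobutils/__init__.py

# Zip it up; python3's built-in zipfile CLI avoids needing `zip` installed.
python3 -m zipfile -c deps.zip jobutils/

# Inside the master container the job would then be submitted as:
#   spark-submit --py-files /tmp/data/deps.zip /tmp/data/main.py
```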
Make sure the jobs can be submitted (through the `spark-submit` command) in the Spark Master container shell. A `data` directory is provided that is mapped between the Spark Master container and your host system; it is accessible as `/tmp/data` within the Docker container. This is where you should place both your jobs and the work sample data (the latter is already included).
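To illustrate the host-to-container mapping, a sketch of staging a job from the host side (the job name `wordcount.py` is hypothetical):

```shell
# A placeholder job script; replace with your real Spark job.
printf 'print("placeholder job")\n' > wordcount.py

# Copy it into the repo's data/ directory on the host...
mkdir -p data
cp wordcount.py data/

# ...and it appears at /tmp/data/wordcount.py inside the master
# container, from where it can be submitted:
#   docker exec -it datamr_master_1 spark-submit /tmp/data/wordcount.py
```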