A Standalone Job in Scala – Screencast #4
In this Spark screencast, we create a standalone Apache Spark job in Scala. In the job, we create a SparkContext, read a file into an RDD of strings, then apply transformations and actions to the RDD and print the results.
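A minimal sketch of such a standalone job (the app name, master setting, and file path are placeholders; a Spark installation and the spark-core dependency are assumed):

```scala
import org.apache.spark.{SparkConf, SparkContext}

object SimpleJob {
  def main(args: Array[String]): Unit = {
    // "local[*]" runs Spark on all local cores; in a cluster you would pass a real master URL.
    val conf = new SparkConf().setAppName("SimpleJob").setMaster("local[*]")
    val sc = new SparkContext(conf)

    // Read a text file into an RDD of strings (one element per line).
    val lines = sc.textFile("README.md") // placeholder path

    // Transformation: lazily filter to lines mentioning "Spark".
    val sparkLines = lines.filter(_.contains("Spark"))

    // Action: count() triggers the actual computation.
    println(s"Lines with Spark: ${sparkLines.count()}")

    sc.stop()
  }
}
```

Transformations like `filter` are lazy; nothing is computed until an action such as `count` runs.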
Spark Screencast Archive
Transformations and Caching – Screencast #3
In this third Spark screencast, we demonstrate more advanced use of RDD actions and transformations, as well as caching RDDs in memory.
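The kind of pattern covered here can be sketched as follows (run inside the interactive shell, where `sc` is predefined; the file name is a placeholder):

```scala
// Build a word-count RDD through chained transformations.
val lines = sc.textFile("data.txt")                 // placeholder file
val words = lines.flatMap(_.split(" "))             // transformation: split lines into words
val counts = words.map((_, 1)).reduceByKey(_ + _)   // transformation: count occurrences per word

counts.cache()   // mark the RDD for in-memory caching; materialized on first action
counts.count()   // first action: computes the RDD and caches it
counts.take(5)   // later actions reuse the cached data instead of re-reading the file
```

Caching pays off when an RDD is reused across multiple actions; without it, each action recomputes the full lineage from the source file.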
Spark Documentation Overview – Screencast #2
This is our second Spark screencast. In it, we take a tour of the documentation available for Spark users online.
First Steps with Spark – Screencast #1
This screencast marks the beginning of a series of hands-on screencasts we will be publishing to help new users get up and running in minutes. In this screencast, we:
- Download and build Spark on a local machine (we use OS X, but the process should be similar on Linux or other Unix systems).
- Introduce the API using the Spark interactive shell to explore a file.
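Exploring a file in the interactive shell might look like this (launch the shell from the Spark directory with `./bin/spark-shell`; `sc` is created for you, and the file name is a placeholder):

```scala
val file = sc.textFile("README.md")       // load a text file as an RDD of lines
file.count()                              // action: number of lines in the file
file.first()                              // action: the first line of the file
file.filter(_.contains("Spark")).count()  // transformation + action: lines containing "Spark"
```

Each expression's result is printed immediately by the shell, which makes it a convenient way to experiment with the API before writing a standalone job.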