
Spark revolves around the concept of a resilient distributed dataset (RDD), which is a fault-tolerant collection of elements that can be operated on in parallel. There are two ways to create RDDs: parallelizing an existing collection in your driver program, or referencing a dataset in an external storage system, such as a shared filesystem, HDFS, HBase, or any data source offering a Hadoop InputFormat.
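A minimal sketch of both creation paths, assuming a running ruby-spark shell where the Spark context is already available as `$sc` (the file path below is a placeholder):

```ruby
# Parallelize an existing Ruby collection from the driver program
numbers = $sc.parallelize([1, 2, 3, 4, 5])

# Reference a dataset in external storage (placeholder HDFS path)
lines = $sc.text_file('hdfs://namenode:9000/path/to/data.txt')
```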

For the list of currently supported methods, check the project's API documentation.

## Examples

### Sum of numbers

```ruby
$sc.parallelize(0..10).sum
# => 55
```
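`parallelize` distributes the range across the workers, and `sum` is an action, so it triggers the actual computation and returns the total (0 + 1 + … + 10 = 55) to the driver.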

### Word count

```ruby
rdd = $sc.text_file(PATH)

rdd = rdd.flat_map(lambda{|line| line.split})
         .map(lambda{|word| [word, 1]})
         .reduce_by_key(lambda{|a, b| a + b})

rdd.collect_as_hash
```
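`flat_map` splits each line into individual words, `map` pairs every word with an initial count of 1, and `reduce_by_key` adds up the counts for each distinct word. `collect_as_hash` then returns the result to the driver as a plain Ruby `Hash` of word–count pairs.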

### To string

```ruby
rdd = $sc.parallelize(0..10)
rdd.map(:to_s).collect
```
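As the example shows, a method name can be passed as a `Symbol` instead of a lambda; the named method is called on every element, so this is equivalent to `rdd.map(lambda{|x| x.to_s})`.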

### Bind object

```ruby
replace_to = '***'

def replacing(word)
  if word =~ /[0-9]+/
    replace_to
  else
    word
  end
end

rdd = $sc.text_file('text.txt')
rdd = rdd.flat_map(lambda{|line| line.split})
rdd = rdd.map(method(:replacing))
rdd = rdd.bind(replace_to: replace_to)
rdd.collect
```
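The `replacing` method is serialized and executed on the workers, where the driver-local variable `replace_to` would not normally be visible. `bind` attaches the given objects to the RDD so they are shipped along with the computation and can be referenced inside the function.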

### Caching

```ruby
rdd = $sc.parallelize(0...5, 1).map(lambda{|x| sleep(1); x})

rdd.collect # waiting 5s

rdd.cache

rdd.collect # waiting 5s
rdd.collect # waiting 0s
```
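`cache` marks the RDD to be kept in memory after it is first computed. Because computation is lazy, the first `collect` after `cache` still pays the full 5 seconds and stores the result; every subsequent `collect` reads from the cache and returns immediately.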