Ondřej Moravčík edited this page Mar 27, 2015
Spark revolves around the concept of a resilient distributed dataset (RDD), which is a fault-tolerant collection of elements that can be operated on in parallel. There are two ways to create RDDs: parallelizing an existing collection in your driver program, or referencing a dataset in an external storage system, such as a shared filesystem, HDFS, HBase, or any data source offering a Hadoop InputFormat.
For the list of currently supported methods, check the RDD documentation.

RDD methods are also available in lowerCamelCase form, so these two calls are equivalent:

```ruby
rdd.flat_map(...)
rdd.flatMap(...)
```
```ruby
$sc.parallelize(0..10).sum
# => 55
```
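For intuition, the same fold in plain Ruby (this mirrors only the arithmetic, not the distributed execution — Spark would split the range into partitions and sum them in parallel):

```ruby
# Plain-Ruby equivalent of parallelize(0..10).sum, for intuition only.
(0..10).reduce(:+)
# => 55
```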
```ruby
rdd = $sc.text_file(PATH)

rdd = rdd.flat_map(lambda{|line| line.split})
         .map(lambda{|word| [word, 1]})
         .reduce_by_key(lambda{|a, b| a+b})

rdd.collect_as_hash
```
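The word-count pipeline above maps onto plain Ruby `Enumerable` calls almost one-to-one. A sketch of the same logic without Spark, where `group_by` plus a per-group sum stands in for `reduce_by_key` (the sample `lines` array is a stand-in for `text_file(PATH)`):

```ruby
lines = ['to be or', 'not to be']  # stand-in for text_file(PATH)

counts = lines
  .flat_map { |line| line.split }            # split lines into words
  .map { |word| [word, 1] }                  # pair each word with a count of 1
  .group_by { |word, _| word }               # group_by stands in for the shuffle
  .map { |word, pairs| [word, pairs.sum { |_, n| n }] }
  .to_h                                      # collect_as_hash analogue

counts['to']
# => 2
```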
```ruby
rdd = $sc.parallelize(0..10)
rdd.map(:to_s).collect
```
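Passing a symbol is analogous to plain Ruby's symbol-to-proc shorthand; both call the named method on every element:

```ruby
# Plain-Ruby analogue of rdd.map(:to_s).collect:
strings = (0..3).map(&:to_s)
# => ["0", "1", "2", "3"]
```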
```ruby
replace_to = '***'

def replacing(word)
  if word =~ /[0-9]+/
    replace_to
  else
    word
  end
end

rdd = $sc.text_file('text.txt')
rdd = rdd.flat_map(lambda{|line| line.split})
rdd = rdd.map(method(:replacing))
rdd = rdd.bind(replace_to: replace_to)
rdd.collect
```
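`bind` is needed because the function is serialized and shipped to workers, where the driver's local variables do not exist. Plain Ruby shows the underlying scoping problem: a `def`-style method cannot see locals of the surrounding scope, while a closure can, which is roughly what `bind` reconstructs for the serialized function (a sketch, not ruby-spark's actual mechanism):

```ruby
replace_to = '***'

def replacing_unbound(word)
  # `replace_to` is NOT visible here: def opens a fresh scope,
  # so calling this method raises NameError
  word =~ /[0-9]+/ ? replace_to : word
end

error = begin
  replacing_unbound('42')
  nil
rescue NameError => e
  e
end
# error is a NameError

# A lambda closes over the local variable, analogous to what
# bind(replace_to: replace_to) supplies on the worker side:
replacing_bound = lambda { |word| word =~ /[0-9]+/ ? replace_to : word }
replacing_bound.call('42')    # => "***"
replacing_bound.call('word')  # => "word"
```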
```ruby
rdd = $sc.parallelize(0...5, 1).map(lambda{|x| sleep(1); x})

rdd.collect # waiting 5s
rdd.cache
rdd.collect # waiting 5s
rdd.collect # waiting 0s
```
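`cache` marks the RDD so its computed partitions are kept in memory after the first action; later actions reuse them instead of recomputing the whole lineage. A plain-Ruby analogy using a call counter (the counter and `compute` lambda are illustrative, not part of ruby-spark):

```ruby
calls = 0
compute = lambda do
  calls += 1               # count how often the "slow" work actually runs
  (0...5).map { |x| x }    # stand-in for the slow map step above
end

# Without caching: every action recomputes the pipeline.
compute.call
compute.call
calls
# => 2

# With caching: compute once, then reuse the materialized result.
cached = compute.call      # pays the cost one more time
cached                     # free
cached                     # free
calls
# => 3
```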