One of the things that makes Kotlin so great to work with, compared to other languages, is its extensive and declarative standard library: functions like `mapNotNull { }` and `first { it > 4 }`. To promote Kotlin for Spark, it might be helpful to bring the standard library closer to Dataset and RDD calculations.
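For reference, this is the kind of declarative pipeline the standard library gives you on ordinary collections (plain Kotlin, no Spark involved):

```kotlin
fun main() {
    val raw = listOf("1", "2", "x", "7")
    val numbers = raw.mapNotNull { it.toIntOrNull() } // drops "x" -> [1, 2, 7]
    println(numbers.first { it > 4 })                 // prints 7
}
```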
There are multiple ways we could achieve this.
The first way is to simply convert Datasets to Iterables and Sequences:
```kotlin
import org.apache.spark.sql.Dataset

inline fun <reified T> Dataset<T>.asSequence(): Sequence<T> = Sequence { toLocalIterator() }
inline fun <reified T> Dataset<T>.asIterable(): Iterable<T> = Iterable { toLocalIterator() }
```
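With those two bridges in place, every standard library function becomes available on a Dataset. A sketch, assuming the asSequence extension above (note that everything after asSequence() runs on the driver, not on the executors):

```kotlin
import org.apache.spark.sql.Dataset

// Sketch: stdlib calls on a Dataset via the asSequence() bridge above.
// The sequence is lazy, so first { } stops pulling rows once it matches.
fun firstLargeEven(ds: Dataset<Int>): Int =
    ds.asSequence()
        .mapNotNull { if (it % 2 == 0) it else null }
        .first { it > 4 }
```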
However, I am not sure whether this would impact performance, since the Spark functions like `filter`, `map` etc. are probably optimized. Converting to a local iterator also moves all subsequent computation to the driver, so it no longer runs distributed.
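To make the trade-off concrete, a sketch (actual numbers would need benchmarking):

```kotlin
import org.apache.spark.api.java.function.FilterFunction
import org.apache.spark.sql.Dataset

fun compare(ds: Dataset<Int>) {
    // Spark's own typed filter: evaluated in parallel on the executors.
    val onCluster: Dataset<Int> = ds.filter(FilterFunction { it > 4 })

    // Stdlib filter via the bridge: rows stream to the driver and are filtered there.
    val onDriver: List<Int> = ds.asSequence().filter { it > 4 }.toList()
}
```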
The second option would be to copy the standard library functions for Sequences/Iterables and put them in place as extensions for Datasets and RDDs, so that the work stays inside Spark; see the sketch below.
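For example, a Dataset-side mapNotNull could delegate to Spark's own flatMap so the transform runs on the executors. A hypothetical sketch; the Encoder is taken as a parameter here because encoder derivation is a separate concern:

```kotlin
import org.apache.spark.api.java.function.FlatMapFunction
import org.apache.spark.sql.Dataset
import org.apache.spark.sql.Encoder

// Hypothetical: a stdlib-style mapNotNull that stays distributed by delegating
// to Dataset.flatMap. The transform is captured in the lambda, so it must be
// serializable to ship to the executors (Kotlin lambdas are by default).
fun <T, R> Dataset<T>.mapNotNull(encoder: Encoder<R>, transform: (T) -> R?): Dataset<R> =
    flatMap(FlatMapFunction { t -> listOfNotNull(transform(t)).iterator() }, encoder)
```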
What do you think, @asm0dey?