[Benchmark] Add parquet read benchmark #1371
Conversation
I'm assuming this applies to CPU-only operations, or are there CUDA kernels executed as part of this as well?
This benchmark is entirely IO/CPU bound. There is effectively no CUDA compute - we are just transferring remote data into host memory and moving it into device memory (when the default …).
Update: I've generalized this benchmark. It's easy to use with S3 storage, but it is also a useful benchmark for local-storage performance.
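For context, the local-storage side of what is being measured can be sketched with the standard library alone. This is an illustrative stand-in, not the benchmark code from this PR; the file size and temporary path are arbitrary, and the real benchmark additionally moves the bytes into device memory:

```python
import os
import tempfile
import time

# Write a throwaway file to stand in for a local parquet dataset.
size = 32 * 1024 * 1024  # 32 MiB
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"\0" * size)
    path = f.name

# Time the host-side read; an IO-bound benchmark like this one is
# dominated by this step rather than by any compute.
start = time.perf_counter()
with open(path, "rb") as f:
    data = f.read()
elapsed = time.perf_counter() - start
os.unlink(path)

throughput = len(data) / elapsed / 2**20  # MiB/s
print(f"read {len(data) / 2**20:.0f} MiB in {elapsed:.4f}s ({throughput:.0f} MiB/s)")
```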
Thanks @rjzamora, I've left some comments.
Nice @rjzamora, looks good. I only have a minor suggestion.
+1 to Mads' suggestion, otherwise LGTM. Thanks @rjzamora!
Co-authored-by: Mads R. B. Kristensen <madsbk@gmail.com>
/merge
Adds a new benchmark for parquet read performance using a `LocalCUDACluster`. The user can pass in `--key` and `--secret` options to specify S3 credentials. E.g.

Notes:

- `--filesystem arrow` together with `--type gpu` performs well, but depends on cudf#16684 (Add experimental `filesystem="arrow"` support in `dask_cudf.read_parquet`)
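A hedged example invocation under stated assumptions: the module path `dask_cuda.benchmarks.read_parquet` and the dataset URL are guesses for illustration; only the `--key`, `--secret`, `--filesystem`, and `--type` options come from the description and discussion above.

```shell
# Assumed entry point; adjust to the actual benchmark module added by this PR.
# The s3:// path is a hypothetical placeholder.
python -m dask_cuda.benchmarks.read_parquet \
    --key "$AWS_ACCESS_KEY_ID" \
    --secret "$AWS_SECRET_ACCESS_KEY" \
    --filesystem arrow \
    --type gpu \
    s3://my-bucket/my-dataset/
```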