[EPIC] Improve shuffle performance #1123
Comments
Do you mean when ...? If so, it makes sense, because the Comet shuffle reader is still JVM-based. We should eventually make it native to boost shuffle read performance. It was on the early roadmap when we started on Comet shuffle, although it was not urgent or high priority at that point. Now I think it is time to begin working on this. Opened: #1125
Yes, exactly.
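To make the "native shuffle reader" idea above a bit more concrete, here is a minimal sketch of what a native read path could look like, assuming shuffle blocks are stored as Arrow IPC streams (an assumption for illustration; the function name and file layout are hypothetical, not Comet's actual API):

```rust
use std::fs::File;

use arrow::error::Result;
use arrow::ipc::reader::StreamReader;
use arrow::record_batch::RecordBatch;

// Hypothetical native shuffle reader: decode a shuffle block directly in
// Rust, with no JVM involvement and no Arrow FFI hop per batch.
fn read_shuffle_block(path: &str) -> Result<Vec<RecordBatch>> {
    let file = File::open(path)?;
    // StreamReader iterates over the record batches encoded in the block.
    let reader = StreamReader::try_new(file, None)?;
    reader.collect()
}
```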
This is also an issue for shuffle writes. The child node of ... We pay the FFI cost twice: once to import from the native plan into the JVM, then again to export to the shuffle-write native plan. We pay the cost of schema serde in both directions. Perhaps there is a way to shortcut this and avoid a full serde, because in this case we do not need to read the batch in the JVM, just pass it from one native plan to another.
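As a rough illustration of the cost described above, here is a minimal sketch using arrow-rs's C Data Interface helpers. This is not Comet's code; the function is a stand-in for the export-to-JVM / re-export-to-native hop, and each `to_ffi`/`from_ffi` pair carries its own schema serde:

```rust
use std::sync::Arc;

use arrow::array::{make_array, ArrayRef, Int32Array};
use arrow::error::Result;
use arrow::ffi::{from_ffi, to_ffi};

// Stand-in for the hop described above: a column produced by one native
// plan is exported over the C Data Interface (as it would be for the JVM)
// and then imported again (as the shuffle-write native plan would do).
fn jvm_hop(column: ArrayRef) -> Result<ArrayRef> {
    // Export: buffers are wrapped in FFI structs and the schema is
    // serialized into an FFI_ArrowSchema.
    let (ffi_array, ffi_schema) = to_ffi(&column.to_data())?;

    // Import: the schema is parsed again and the buffers are re-wrapped.
    // Safety: the structs above stay alive for the duration of the call.
    let imported = unsafe { from_ffi(ffi_array, &ffi_schema)? };
    Ok(make_array(imported))
}

fn main() -> Result<()> {
    let column: ArrayRef = Arc::new(Int32Array::from(vec![1, 2, 3]));
    // In the scenario above, a hop like this happens twice per batch.
    let back = jvm_hop(column)?;
    assert_eq!(back.len(), 3);
    Ok(())
}
```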
I created a Google document for collaborating on ideas around improving shuffle performance: https://docs.google.com/document/d/1rx1ue7UZ4ljzic9Rc2kT-v35bfLB7Rhhe5FW1d0Sw4I/edit?usp=sharing
This epic is for improving shuffle / ScanExec performance.
Issues
Context
I have been comparing Comet and Ballista performance for TPC-H q3. Both execute similar native plans. I am using the `comet-parquet-exec` branch, which uses DataFusion's `ParquetExec`. Ballista is approximately 3x faster than Comet. Given that they are executing similar DataFusion native plans, I would expect performance to be similar.

The main difference between Comet and Ballista is that Comet transfers batches between the JVM and native code during shuffle operations.

Most of the native execution time in Comet is spent in `ScanExec`, which reads Arrow batches from the JVM using Arrow FFI. This time was not included in our metrics prior to #1128 and #1111.