docs: Update tuning guide #995
Conversation
--conf spark.memory.offHeap.enabled=true \
--conf spark.memory.offHeap.size=10g \
enable unified memory management
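For context, a minimal sketch of how the off-heap settings above might be combined with Comet when building a session (the 10g size and the `spark.comet.enabled` flag are illustrative assumptions, not taken from this diff):

```scala
import org.apache.spark.sql.SparkSession

// Sketch: enable Spark's unified off-heap memory so Comet's native execution
// and Spark share a single memory pool. Sizes are illustrative.
val spark = SparkSession.builder()
  .appName("comet-offheap-example")
  .config("spark.memory.offHeap.enabled", "true") // unified off-heap memory management
  .config("spark.memory.offHeap.size", "10g")     // pool shared by Spark and Comet native operators
  .config("spark.comet.enabled", "true")          // assumed Comet toggle; check the tuning guide
  .getOrCreate()
```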
@Kontinuation Could you review?
LGTM
For example, if the executor can execute 4 plans concurrently, then the total amount of memory allocated will be
`4 * spark.comet.memory.overhead.factor * spark.executor.memory`.
AFAIK this is a simplified, sometimes inaccurate model for estimating the amount of memory allocated, since each stage may create multiple Comet native plans (see point 3 and the example DAG in #949), but I think it is good enough for most cases.
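To make the simplified model concrete, here is a sketch of the estimate it produces; the concurrency, overhead factor, and executor memory values below are illustrative only, and as noted above the real footprint can be higher when a stage contains multiple native plans:

```scala
// Simplified estimate from the tuning guide:
//   estimated native memory ≈ concurrent plans * spark.comet.memory.overhead.factor * spark.executor.memory
val concurrentPlans  = 4     // e.g. number of tasks an executor runs in parallel
val overheadFactor   = 0.2   // illustrative value for spark.comet.memory.overhead.factor
val executorMemoryGb = 16.0  // illustrative value for spark.executor.memory

val estimatedNativeGb = concurrentPlans * overheadFactor * executorMemoryGb
println(f"Estimated native memory: $estimatedNativeGb%.1f GB") // prints 12.8 GB
```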
`spark.comet.exec.shuffle.mode` to `auto` will let Comet choose the best shuffle mode based on the query plan.
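As a hedged illustration (not part of this diff), the shuffle settings could be applied through the session conf, assuming they are honored when set at runtime:

```scala
// Let Comet choose between native and JVM shuffle based on the query plan.
spark.conf.set("spark.comet.exec.shuffle.enabled", "true")
spark.conf.set("spark.comet.exec.shuffle.mode", "auto")
```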
`CometScanExec` uses nanoseconds for total scan time. Spark also measures scan time in nanoseconds but converts to
milliseconds _per batch_ which can result in a large loss of precision. In one case we saw total scan time
We can probably get away without citing exact numbers and just highlight that the loss of precision can be as much as 2x?
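A rough sketch of why converting to milliseconds per batch loses precision; the batch count and per-batch duration below are made up for illustration:

```scala
// Each batch takes 1.9 ms, measured in nanoseconds.
val batches       = 1000
val perBatchNanos = 1900000L

// Per-batch conversion: truncate each batch to whole milliseconds, then sum.
val perBatchMs = (1 to batches).map(_ => perBatchNanos / 1000000).sum // 1000 ms

// Single conversion: sum nanoseconds first, convert once at the end.
val totalMs = (perBatchNanos * batches) / 1000000                     // 1900 ms

println(s"per-batch: $perBatchMs ms vs total: $totalMs ms") // nearly a 2x difference
```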
I made this change, but it looks like I failed to push it before merging the PR. I will address it in my next PR.
It mostly looks good to me.
I don't remember exactly why the unified memory manager was not enabled by default. There was a tricky edge case: if Spark decides to abort native plans, it needs more JVM memory than native memory, and if that happens Spark jobs will fail with OOM because pod memory is occupied by native buffers that are no longer in use.
@sunchao could you chime in on this matter?
Thanks for the reviews @Kontinuation @comphead @viirya.
I don't remember any issue related to off-heap memory mode itself, just that all the memory-related configurations need to be carefully tuned. For instance, we may still need to reserve some JVM memory for certain operations (like broadcast?). One thing I was trying to do is hide all these configuration changes behind the Comet driver plugin, so that when a user enables Comet, the existing job configurations, including
Which issue does this PR close?
Part of #949
Rationale for this change
Provide better documentation for tuning memory usage.
Rendered version of this PR
What changes are included in this PR?
How are these changes tested?