
docs: Update tuning guide #995

Merged (5 commits into apache:main, Oct 3, 2024)
Conversation

@andygrove (Member) commented Oct 2, 2024

Which issue does this PR close?

Part of #949

Rationale for this change

Provide better documentation for tuning memory usage.

Rendered version of this PR

What changes are included in this PR?

How are these changes tested?

Comment on lines +60 to +61
--conf spark.memory.offHeap.enabled=true \
--conf spark.memory.offHeap.size=10g \
@andygrove (Member, Author) commented:
enable unified memory management
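For context, the two flags quoted above sit inside a larger spark-submit invocation. The sketch below shows how they might fit together; the jar path, executor memory, and 10g size are illustrative placeholders rather than values from this PR, and the plugin class line is an assumption based on how Comet is typically enabled:

```shell
# Sketch only: paths and sizes are illustrative assumptions, not from this PR.
spark-submit \
  --jars /path/to/comet.jar \
  --conf spark.plugins=org.apache.spark.CometPlugin \
  --conf spark.memory.offHeap.enabled=true \
  --conf spark.memory.offHeap.size=10g \
  --conf spark.executor.memory=8g \
  ...
```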

@andygrove andygrove marked this pull request as ready for review October 3, 2024 14:44
@andygrove andygrove requested review from viirya and comphead October 3, 2024 14:49
@andygrove (Member, Author) commented:
@Kontinuation Could you review?

@Kontinuation (Member) left a comment:
LGTM

Comment on lines +50 to +51
For example, if the executor can execute 4 plans concurrently, then the total amount of memory allocated will be
`4 * spark.comet.memory.overhead.factor * spark.executor.memory`.
@Kontinuation (Member) commented Oct 3, 2024:
AFAIK this is a simplified, sometimes inaccurate model for estimating the amount of memory allocated, since each stage may create multiple Comet native plans (see point 3 and the example DAG in #949), but I think it is good enough for most of the cases.
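As a rough numeric illustration of the simplified model quoted above (all values here are assumed examples; the 0.2 overhead factor is an illustrative placeholder, not taken from this PR):

```python
# Sketch of the simplified memory model discussed above.
# All values are illustrative assumptions.
executor_memory_gb = 8   # spark.executor.memory (assumed example)
overhead_factor = 0.2    # spark.comet.memory.overhead.factor (assumed example)
concurrent_plans = 4     # plans the executor can execute concurrently

total_gb = concurrent_plans * overhead_factor * executor_memory_gb
print(total_gb)  # 4 * 0.2 * 8 = 6.4
```

As the reviewer notes, a stage may create multiple native plans, so the real allocation can exceed this estimate.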


`spark.comet.exec.shuffle.mode` to `auto` will let Comet choose the best shuffle mode based on the query plan.
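For reference, this is an ordinary Spark conf setting; a hedged sketch of the relevant line in a spark-submit invocation:

```shell
# Illustrative fragment of a spark-submit command.
--conf spark.comet.exec.shuffle.mode=auto \
```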
`CometScanExec` uses nanoseconds for total scan time. Spark also measures scan time in nanoseconds but converts to
milliseconds _per batch_ which can result in a large loss of precision. In one case we saw total scan time
@comphead (Contributor) commented:
We can probably get away from exact numbers and just highlight that the loss of precision can be up to 2x?

@andygrove (Member, Author) commented:
I made this change, but it looks like I failed to push it before merging the PR. I will address it in my next PR.
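The precision loss under discussion can be sketched with a small numeric example: truncating each batch's scan time to whole milliseconds before summing discards most of the signal when batches are fast. The batch count and timings below are invented for illustration, and this assumes truncation (Spark's actual conversion may round):

```python
# Illustration of per-batch ns -> ms truncation vs. summing in ns.
# Batch timings are invented for illustration.
batch_times_ns = [400_000] * 1000  # 1000 batches, 0.4 ms each

# Summing nanoseconds keeps full precision: 400,000,000 ns = 400 ms.
total_ns = sum(batch_times_ns)

# Converting each batch to whole ms first truncates 0.4 ms to 0.
total_ms_per_batch = sum(ns // 1_000_000 for ns in batch_times_ns)

print(total_ns / 1_000_000)  # 400.0
print(total_ms_per_batch)    # 0
```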

@comphead (Contributor) left a comment:

It mostly looks good to me.
I don't remember exactly why the unified memory manager was not enabled by default. There was a tricky edge case: if Spark decides to abort native plans, it needs more JVM memory than native memory, and if this happens, Spark jobs will fail with OOM because pod memory is occupied by a native buffer that is not in use.

@sunchao if you could chime in on this matter?

@andygrove (Member, Author) commented:
Thanks for the reviews @Kontinuation @comphead @viirya.

@andygrove andygrove merged commit 3413397 into apache:main Oct 3, 2024
@andygrove andygrove deleted the update-tuning-guide branch October 3, 2024 17:07
@sunchao (Member) commented Oct 3, 2024:

I don't remember any issue related to off-heap memory mode itself, just that all the memory-related configurations need to be carefully tuned. For instance, we may still need to reserve some JVM memory for certain operations (like broadcast?).

One thing I was trying to do is to hide all these configuration changes behind the Comet driver plugin, so that when a user enables Comet, the existing job configurations, including spark.executor.memory, spark.executor.memoryOverhead, etc., would be converted to off-heap memory transparently. This would require some Spark-side changes, such as https://issues.apache.org/jira/browse/SPARK-46947. There is one more change I made internally that uses Java reflection to overwrite certain configs in the Spark memory manager, because of early initialization in Spark.

5 participants