
use available cores in k8s cluster local mode #610

Merged
1 commit merged into master on Oct 9, 2019

Conversation

onursatici
Upstream SPARK-XXXXX ticket and PR link (if not applicable, explain)

k8s cluster local mode is currently only on our fork

What changes were proposed in this pull request?

Use all available cores, as reported by Runtime.getRuntime.availableProcessors, in local k8s submissions. This makes k8s cluster local submission behave the same as k8s cluster submission with respect to CPU sizing.

Background:
Currently, even when k8s CPU limits are set, Java 8 Spark applications fail to derive the correct core count from those limits (https://bugs.openjdk.java.net/browse/JDK-6515172).
On k8s cluster submission, Spark driver CPU overrides only change the CPU requests. There is optionally a Spark override to set CPU limits as well, but these only take effect when running on Java 9+.
When running on Java 9+ with this PR, CPU limits set in k8s make availableProcessors return the limit, and the local backend uses the correct number of threads.
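As a minimal sketch (not code from this PR), the sizing logic described above comes down to asking the JVM for its processor count. On Java 9+, which is container-aware, a k8s CPU limit caps the value returned; on Java 8 the call reflects the host machine's cores, which is the bug linked above. The backend-sizing comment is a hypothetical illustration of how a local scheduler would consume this value:

```java
public class AvailableCoresDemo {
    public static void main(String[] args) {
        // On Java 9+ with container support, a k8s CPU limit caps this value;
        // on Java 8 it reports the host machine's cores (JDK-6515172).
        int cores = Runtime.getRuntime().availableProcessors();

        // Hypothetical: a local-mode backend would size its task thread pool
        // from this value, i.e. the equivalent of a "local[cores]" master.
        System.out.println("Using " + cores + " threads for the local backend");
    }
}
```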

@onursatici onursatici merged commit 8c143b5 into master Oct 9, 2019
@onursatici onursatici deleted the os/local-spark-cores branch October 9, 2019 12:22