Support setting driver/executor memory and memoryLimit separately #1489
Comments
+1
Any progress on this issue?
Also interested in any progress on this.
+1
+1
+1
We faced the same issue and implemented a basic fix in a fork; the downside is that we had to manually update the existing CRDs to include this value. Feel free to implement the changes properly, so that the CRDs are also updated, if you need it.
+1, is there any solution that doesn't require a fork?
@jiakin888, you may try the new feature that defines a pod template for the driver and executor: #2141
Unfortunately, it does not work @missedone
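For anyone trying the pod-template route mentioned above, here is a rough sketch of what it might look like. The `template` field name, its placement, and the container name are assumptions based on the linked feature (#2141), not verified against a specific operator version:

```yaml
apiVersion: sparkoperator.k8s.io/v1beta2
kind: SparkApplication
metadata:
  name: spark-app
spec:
  executor:
    # Assumption: the pod template feature allows setting container
    # resources directly on the executor pod spec.
    template:
      spec:
        containers:
          - name: spark-kubernetes-executor
            resources:
              requests:
                memory: "4Gi"   # request lower than the limit
              limits:
                memory: "8Gi"
```

Note that Spark itself manages the resources of the containers it creates and may override values set this way, which could be why this approach did not work for some commenters.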
Spark's Kubernetes executor uses the same value for the memory request and the memory limit, and the current operator API matches that: although we have both `cores` and `coreLimit`, there is only `memory`.

However, in some cases it can be useful to set a memory request that is lower than the memory limit, as a form of over-subscription: Spark tasks will not always use all of the requested memory, so this can help increase overall cluster memory utilization.

In extreme cases, this could also be seen as a counterpart to disabling memory enforcement in YARN clusters, although Kubernetes would give us the opportunity to tune this much more finely than just enabling or disabling memory enforcement.
My assumption is that the operator could support different limit and request values, even though Spark itself doesn't, via the mutating webhook.
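To make the proposal concrete, a hypothetical `memoryLimit` field could mirror the existing `cores`/`coreLimit` pair. This is only a sketch of the suggested API; `memoryLimit` is not an existing field in the `v1beta2` CRD:

```yaml
apiVersion: sparkoperator.k8s.io/v1beta2
kind: SparkApplication
metadata:
  name: spark-pi
spec:
  driver:
    cores: 1
    coreLimit: "1200m"
    memory: "2g"        # existing field: becomes both request and limit today
    memoryLimit: "4g"   # proposed field: limit higher than the request
  executor:
    instances: 2
    cores: 2
    memory: "4g"
    memoryLimit: "8g"   # proposed: allows over-subscription of cluster memory
```

Under this scheme, the webhook would patch the executor pod's memory request down to `memory` while keeping the container limit at `memoryLimit`, analogous to how `coreLimit` works today.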