
Support setting driver/executor memory and memoryLimit separately #1489

Open
peay opened this issue Feb 25, 2022 · 10 comments

Comments

@peay

peay commented Feb 25, 2022

Spark's Kubernetes executor uses the same value for the memory request and the memory limit, and the current operator API matches that: although we have both cores and coreLimit, there is only memory.

However, in some cases it can be useful to set a memory request that is lower than the memory limit, as a form of over-subscription: Spark tasks will not always use all of the requested memory, so this can help increase overall cluster memory utilization.

In extreme cases, this could also be seen as a counterpart to how you can disable memory enforcement in YARN clusters, although Kubernetes would give us the opportunity to tune this much more finely than just enabling or disabling memory enforcement.

My assumption is that, through the mutating webhook, the operator could support different limit and request values even though Spark itself doesn't.
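For illustration, here is a hypothetical SparkApplication excerpt showing what this could look like. The memoryLimit field is an assumption, not part of the current operator API; it simply mirrors the existing cores/coreLimit pair:

```yaml
# Hypothetical sketch -- `memoryLimit` does not exist in the operator API
# today; it is shown here as a proposed counterpart to `coreLimit`.
apiVersion: sparkoperator.k8s.io/v1beta2
kind: SparkApplication
metadata:
  name: example
spec:
  executor:
    cores: 1           # CPU request
    coreLimit: "2"     # CPU limit (already supported today)
    memory: "2g"       # today: used for BOTH request and limit
    memoryLimit: "4g"  # proposed: a separate, higher limit for over-subscription
```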

@pycgo

pycgo commented Jul 5, 2023

+1

@harryzhang2016

Any progress on this issue?

@mk-raven

Also interested in any progress here.

@filippomascolo

+1

@TheDevilDan

+1

@catinwarmhands

+1

@sergeykranga

sergeykranga commented Oct 22, 2024

We faced the same issue and implemented basic memoryLimit functionality in our fork, which we now run in our production cluster. If you are interested, here are the changes: master...extole:spark-operator:v1beta2-1.6.1-3.5.0

The downside is that we had to manually update the existing CRDs to include this value. Feel free to implement the changes properly so that the CRDs are also updated if you need it.

@jiakin888

+1, is there any solution without a fork?

@missedone
Contributor

@jiakin888, you may try the new feature that defines a pod template for the driver and executor: #2141
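If the pod template support from #2141 applies container resource settings verbatim, something like the following might express a lower request than limit. This is only a sketch: whether the operator or Spark's own memory settings override these values needs verifying, and the template field layout is assumed to follow a standard Kubernetes PodTemplateSpec:

```yaml
# Sketch only: assumes the driver pod template's container resources are
# honored rather than overwritten by the operator's memory handling.
spec:
  driver:
    template:
      spec:
        containers:
          - name: spark-kubernetes-driver
            resources:
              requests:
                memory: "2Gi"  # lower request, allowing over-subscription
              limits:
                memory: "4Gi"  # hard limit enforced by the kubelet
```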

@jiakin888

jiakin888 commented Nov 22, 2024

> @jiakin888 , you may try this new feature that defines the pod template for driver and executor #2141

Unfortunately, it does not work @missedone
