[SPARK-11082][YARN] Fix wrong core number when response vcore is less than requested vcore #9095
Conversation
Test build #43639 has finished for PR 9095 at commit

@srowen, not sure what exactly you mean? From what I know in …

Gotcha. This is probably my ignorance/misunderstanding then. As long as this is the only place where the fact that the requested amount isn't the same as the granted amount matters. Does the same thing happen with memory?
@srowen, we already have such defensive code for memory.
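(For illustration only, a minimal sketch of what such a defensive check might look like; the names are hypothetical and this is not the actual Spark code.)

```scala
// Hypothetical sketch of a defensive memory check, in the spirit of the
// one referenced above; names are made up for illustration.
case class GrantedContainer(memoryMb: Int, vcores: Int)

// Only accept a container whose granted memory covers the executor's need.
def memoryIsSufficient(c: GrantedContainer, requiredMemoryMb: Int): Boolean =
  c.memoryMb >= requiredMemoryMb
```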
True. I mean, should we expect a case where the granted memory is less than requested as well, and allow or handle it? Right now it's rejected, so I expect it can't happen. But then again, the code seemed to assume that (sort of) about vcores too.
Not sure whether, on the YARN side, the granted memory can possibly be less than requested; I haven't met such a problem yet.
Memory should never be less than requested. vcores support was added later, though, and if it's configured off or the scheduler doesn't support it, then it's possible to get back less. As mentioned, the DefaultResourceCalculator just always returns 1.
There is already a comment on this at matchContainerToRequest. Is this actually failing, or were you just surprised at what you got?
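(For context, a hedged sketch of the behavior described above: a memory-only calculator sizes containers purely by memory, so the vcore count it hands back is not derived from the request. Names are simplified; this is not Hadoop's actual source.)

```scala
// Simplified illustration (not Hadoop's actual code) of why a
// memory-only calculator hands back a fixed vcore count.
case class Resource(memoryMb: Int, vcores: Int)

def normalizeMemoryOnly(requested: Resource, minAlloc: Resource): Resource = {
  // Round the memory request up to a whole multiple of the minimum allocation.
  val units = math.max(1, math.ceil(requested.memoryMb.toDouble / minAlloc.memoryMb).toInt)
  // Requested vcores are ignored: the granted value falls through to the
  // minimum (typically 1), which is what then shows up in the YARN UI.
  Resource(units * minAlloc.memoryMb, minAlloc.vcores)
}
```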
It will not fail; it just made me quite confused to see that the core count I set is different from what's displayed on the YARN side.
Note that wrong vcore accounting in YARN can affect system integrity due to over-scheduling the CPU. It is mandatory to have this working correctly if Spark is to be a good citizen on YARN (alongside other scheduled apps, or itself).
So I'm actually against this change. It breaks backwards compatibility, and I think the current behavior is what we want. @jerryshao, why do you think this is a problem? If YARN doesn't schedule for cores, the options are either to limit Spark to what YARN gives you (which is 1 simply as a default, since it isn't managing cores) or to allow Spark to go ahead and use what the user asked for. The way it is now (without this patch), it allows Spark to use more than 1 since the scheduler can't schedule them; it's up to the user to do something reasonable. Otherwise there is no way to allow Spark to use more than 1 core with the DefaultResourceCalculator, which I think would be a limitation.
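(A hedged sketch of the behavior being defended here, with hypothetical names: the core count the driver schedules with comes from the user's configuration, not from the vcore count YARN echoes back.)

```scala
// Illustration only (hypothetical names): without this patch, Spark
// advertises the configured core count to the driver and deliberately
// ignores the vcore count in the granted container.
def coresToAdvertise(configuredExecutorCores: Int, grantedVcores: Int): Int =
  configuredExecutorCores // grantedVcores (often 1) is not consulted
```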
But from YARN's side only 1 vcore is actually allocated, whereas on the driver side it is notified with more than 1 core when the executor gets registered. This is inconsistent and breaks the semantics of "resource": the driver will schedule more than 1 task on this executor simultaneously, but the actual parallelism is only 1.
If the user wants to set executor cores to more than 1, the user should choose the dominant resource calculator; that will keep things consistent on both the Spark and YARN sides, as shown in the config snippet below.
Actually, YARN doesn't allocate any. The only reason it reports 1 is because CPU scheduling is disabled and it's trying to return something reasonable. YARN does not limit you to 1 core.
Sometimes it's not up to the user what scheduler they use. In our case, for instance, cluster admins choose what runs and users just use it; they have to use whatever scheduler is provided. If the cluster admins want to enforce CPU usage, they need to enable CPU scheduling. If CPU scheduling isn't on, then they have to go smack users that abuse it.
Yeah, I get it, thanks a lot for your explanation. Still, from the user's point of view it can easily be confusing; maybe we should document this difference.
There's related discussion about this in https://issues.apache.org/jira/browse/SPARK-6050 and the respective PR (#4818). |
Yes, it's really more a YARN problem than a Spark problem. Ideally the YARN side wouldn't show cores at all if you aren't using a scheduler that does cores, but that is kind of hard because you can write your own scheduler that does anything. I'm fine with documenting it, but if you look at the running-on-YARN page, it already has the following under important notes: "Whether core requests are honored in scheduling decisions depends on which scheduler is in use and how it is configured." If you have ideas on making that documentation better, I'm fine with it.
Thanks a lot @tgravescs and @vanzin; it looks like this behavior is intentional. I greatly appreciate your explanation; I will close this.
@jdesmet you did not understand what this PR was about. Nothing you're saying is affected by this PR. Accounting of core usage in YARN is not changed. Please read the whole discussion and linked PRs to understand why this doesn't affect any accounting at all. |
@vanzin Humbly, I think I understood what this PR was about. I probably (still) do not understand some of the reasoning as to why we can't report the correct vCores even if the default resource calculator does not support it (ignores it) and vCores are not used. The thread seemed to suggest it is possible, and it was actually attempted in some modifications that were undone. Don't take this as me saying it's wrong; it is probably just that you have a better understanding of it. However, nothing against documenting it further? Also, to confirm, and to make sure I am not misunderstanding anything: as per the threads and documentation, to get it to work based on vCore resource allocation, the following steps need to be accomplished:
Probably we need to file a bug to get the Hadoop documentation fixed from …
@jdesmet Spark is not reporting anything, and that's the part you are confused about. YARN does all its accounting correctly. If Spark were able to influence YARN's accounting, that would be a huge bug in YARN. |
However, the memory reported in the YARN UI for the containers seems to largely match what I declared for the Spark executors. Also, the capacity scheduler does have the option to use a resource calculator capable of accounting for CPU utilization. That makes me (wrongly?) assume that the capacity scheduler can take into account (measured?) memory and CPU utilization.
@jdesmet, by default, if CPU scheduling is not enabled in YARN, what you see on YARN's web UI about vcore usage (1 per container) is actually meaningless. I think it confuses you because what you specified is a different number, yet YARN shows only 1 core. This is only a YARN UI issue that is quite misleading when CPU scheduling is not enabled; internally, in YARN's scheduling, all the resource accounting is correct.
This should be guarded against by using the vcore number from the response; this happens when the DefaultResourceCalculator is used in the capacity scheduler, which is the default.
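(A hedged sketch of the guard this PR proposed, with hypothetical names; since the PR was ultimately closed, this is not what Spark ships.)

```scala
// Illustration of the proposed guard (hypothetical names): cap the
// advertised core count at the vcore count YARN actually granted.
def effectiveCores(requestedCores: Int, grantedVcores: Int): Int =
  math.min(requestedCores, grantedVcores)
```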