Conversation

@jerryshao
Contributor

This should be guarded against by using the vcore number from the response; this happens when the capacity scheduler uses the DefaultResourceCalculator, which it does by default.

@SparkQA

SparkQA commented Oct 13, 2015

Test build #43639 has finished for PR 9095 at commit 5fb7413.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@srowen
Member

srowen commented Oct 13, 2015

CC @sryza @vanzin seems reasonable to make sure it's actually allocating what YARN said it could?

Is this really the extent of the assumption, though? It seems like Spark otherwise assumes elsewhere that the number of cores it wanted was the number of cores it got.

@jerryshao
Contributor Author

@srowen, I'm not sure exactly what you mean?

From what I know, CoarseGrainedSchedulerBackend manages executors using the number of cores available; this number is reported by each executor when it is launched and registers with the driver. The executor gets its core count from an argument in the launch command, so if we specify the wrong number of cores there, the driver will also see the wrong number, which will differ from what the cluster manager reports.
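As a rough sketch of the flow described above (hypothetical names and a simplified launch command, not the actual Spark classes): the core count the driver schedules against is whatever value was baked into the executor's launch arguments, not what YARN actually granted.

```java
// Illustrative sketch of how the driver's view of executor cores is derived
// from the launch command rather than from the YARN allocation.
public class CoreFlowSketch {

    // Spark writes the *requested* core count into the executor launch command.
    static String buildLaunchCommand(int requestedCores) {
        return "CoarseGrainedExecutorBackend --cores " + requestedCores;
    }

    // The executor parses --cores from its arguments and reports that number
    // back at registration; the driver then schedules tasks against it.
    static int coresSeenByDriver(String launchCommand) {
        String[] parts = launchCommand.split(" ");
        for (int i = 0; i < parts.length - 1; i++) {
            if (parts[i].equals("--cores")) {
                return Integer.parseInt(parts[i + 1]);
            }
        }
        return 1;
    }

    public static void main(String[] args) {
        int requested = 4;       // e.g. spark.executor.cores=4
        int grantedByYarn = 1;   // DefaultResourceCalculator reports 1 vcore per container
        int driverView = coresSeenByDriver(buildLaunchCommand(requested));
        // The driver sees 4 cores even though YARN reports 1.
        System.out.println("driver sees " + driverView
                + " cores, YARN reports " + grantedByYarn);
    }
}
```

This is why the two views can disagree: the launch argument and the container allocation are independent values.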

@srowen
Member

srowen commented Oct 13, 2015

Gotcha. This is probably my ignorance/misunderstanding then. As long as this is the only place where it matters that the requested amount wasn't the same as the granted amount.

Does the same thing happen with memory?

Contributor Author

@srowen , we already had such defensive code for memory.

Member

True. I mean, should we expect a case where the granted memory is less than requested as well, and allow or handle it? Right now it's rejected, so I expect it can't happen. But then again, the code seemed to assume that (sort of) about vcores too.

Contributor Author

I'm not sure the granted memory can ever be less than requested on the YARN side; I haven't run into that problem yet.

Contributor

Memory should never be less than requested. vcores support was added later, though, and if it's configured off or the scheduler doesn't support it, then it's possible to get back less. As mentioned, the DefaultResourceCalculator just always returns 1.

There is already a comment on this at matchContainerToRequest. Is this actually failing, or were you just surprised at what you got?

Contributor Author

It will not fail; it just confused me to see that the number of cores I set differed from what was displayed on the YARN side.


Note that wrong vcore accounting in YARN can affect system integrity by over-scheduling the CPU. Getting this right is mandatory if Spark is to be a good citizen on YARN (alongside other scheduled apps, or itself).

@tgravescs
Contributor

I'm actually against this change. It breaks backwards compatibility, and I think the current behavior is what we want.

@jerryshao why do you think this is a problem?

If YARN doesn't schedule for cores, then the options are to limit Spark to what YARN reports (which is simply 1 as a default, since YARN isn't managing cores) or to let Spark go ahead and use what the user asked for. The way it is now (without this patch), Spark can use more than 1 even though the scheduler can't schedule for them; it's up to the user to do something reasonable. Otherwise there would be no way to let Spark use more than 1 core with the DefaultResourceCalculator, which I think would be a limitation.

@jerryshao
Contributor Author

But from YARN's side, only 1 vcore was actually allocated, whereas the driver is notified of more than 1 core when the executor registers. This is inconsistent and breaks the semantics of "resource": the driver will schedule more than 1 task on this executor simultaneously, but the actual parallelism is only 1.

@jerryshao
Contributor Author

If users want to set executor cores to more than 1, they should choose the dominant resource calculator, which keeps things consistent on both the Spark and YARN sides.

@tgravescs
Contributor

Actually, YARN doesn't allocate any. The only reason it reports 1 is that cpu scheduling is disabled and it's trying to return something reasonable. YARN does not limit you to 1 core.
Before the cpu scheduler was available, this was the only way to get more than 1 core for your application, and if you are on an older version of Hadoop you didn't have the cpu scheduler as an option. Basically, if YARN isn't managing a resource, it's up to the user to do something reasonable with it.
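A toy model of the difference being discussed (simplified, hypothetical code, not the actual Hadoop calculator classes): a memory-only calculator, like the DefaultResourceCalculator, ignores the cpu dimension of a request entirely, while a dominant-resource calculator enforces both dimensions.

```java
// Toy comparison of memory-only vs memory-and-cpu container fitting.
public class CalculatorSketch {

    // Memory-only fit, as with the DefaultResourceCalculator:
    // the vcore request plays no part in the decision.
    static int fitsByMemory(int nodeMemMb, int nodeVcores, int reqMemMb, int reqVcores) {
        return nodeMemMb / reqMemMb;
    }

    // Memory-and-cpu fit, as with the DominantResourceCalculator:
    // whichever resource runs out first limits the count.
    static int fitsByDominant(int nodeMemMb, int nodeVcores, int reqMemMb, int reqVcores) {
        return Math.min(nodeMemMb / reqMemMb, nodeVcores / reqVcores);
    }

    public static void main(String[] args) {
        int nodeMem = 32 * 1024, nodeCores = 8;  // one NodeManager: 32 GB, 8 vcores
        int askMem = 4 * 1024, askCores = 4;     // one executor ask: 4 GB, 4 cores

        // Memory-only: 8 executors fit, so cpu can be oversubscribed 4x.
        System.out.println("memory-only: " + fitsByMemory(nodeMem, nodeCores, askMem, askCores));
        // Dominant: only 2 executors fit once cpu is enforced.
        System.out.println("dominant:    " + fitsByDominant(nodeMem, nodeCores, askMem, askCores));
    }
}
```

This is the trade-off in the thread: with the memory-only calculator, it is up to the user to pick a sensible core count, since nothing in YARN enforces it.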

@tgravescs
Contributor

Sometimes it's not up to the user which scheduler they use. In our case, for example, cluster admins choose what runs and users just use it; they have to use whatever scheduler is provided. If the cluster admins want to enforce cpu usage, they need to enable cpu scheduling. If cpu scheduling isn't on, they have to go smack users who abuse it.

@jerryshao
Contributor Author

Yeah, I get it. Thanks a lot for the explanation. Still, from the user's point of view it is easy to get confused; maybe we should document this difference.

@vanzin
Contributor

vanzin commented Oct 13, 2015

There's related discussion about this in https://issues.apache.org/jira/browse/SPARK-6050 and the respective PR (#4818).

@tgravescs
Contributor

Yes, it's really more a YARN problem than a Spark problem. Ideally the YARN side wouldn't show cores at all if you aren't using a scheduler that handles cores, but that is kind of hard because you can write your own scheduler that does anything.

I'm fine with documenting it, but if you look at the Running on YARN page, it already has the following under important notes:

Whether core requests are honored in scheduling decisions depends on which scheduler is in use and how it is configured.

If you have ideas for making that documentation better, I'm fine with that.

@jerryshao
Contributor Author

Thanks a lot @tgravescs and @vanzin; it looks like this behavior is intentional. I greatly appreciate the explanation. I will close this.

@jerryshao jerryshao closed this Oct 14, 2015
@jerryshao jerryshao deleted the SPARK-11082 branch October 14, 2015 01:22
@vanzin
Contributor

vanzin commented Apr 30, 2016

@jdesmet you did not understand what this PR was about. Nothing you're saying is affected by this PR. Accounting of core usage in YARN is not changed. Please read the whole discussion and linked PRs to understand why this doesn't affect any accounting at all.

@jdesmet

jdesmet commented May 1, 2016

@vanzin Humbly, I think I understood what this PR was about. I probably (still) don't understand some of the reasoning as to why we can't report the correct vCores even if the default resource calculator does not support them (ignores them) and vCores are not used. The thread seemed to suggest it is possible, and it was actually attempted in some modifications that were later undone. Don't take this as me saying it's wrong; it's probably just that you have a better understanding of it. But nothing against documenting it further?

Also, to confirm (making sure I am not misunderstanding anything): as per the threads and documentation, to get vCore-based resource allocation to work, the following steps need to be accomplished:

  1. Use the CapacityScheduler: in conf/yarn-site.xml, set yarn.resourcemanager.scheduler.class to org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.
  2. Modify the resource calculator to one that supports vCores: in conf/capacity-scheduler.xml, set yarn.scheduler.capacity.resource-calculator to org.apache.hadoop.yarn.util.resource.DominantResourceCalculator.

Is that right?
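For reference, a sketch of what those two steps look like as configuration (the property and class names are from the Hadoop docs; note that the resource-calculator property normally lives in capacity-scheduler.xml rather than yarn-site.xml):

```xml
<!-- conf/yarn-site.xml: use the CapacityScheduler -->
<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
</property>

<!-- conf/capacity-scheduler.xml: schedule on memory AND cpu -->
<property>
  <name>yarn.scheduler.capacity.resource-calculator</name>
  <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
</property>
```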

Probably we should file a bug to get the Hadoop documentation fixed from DefaultResourseCalculator to DefaultResourceCalculator.

@vanzin
Contributor

vanzin commented May 2, 2016

why we can't report the correct vCores

@jdesmet Spark is not reporting anything, and that's the part you are confused about. YARN does all its accounting correctly. If Spark were able to influence YARN's accounting, that would be a huge bug in YARN.

@jdesmet

jdesmet commented May 2, 2016

However, the memory reported in the YARN UI for the containers seems to largely match what I declared for the Spark executors. Also, the capacity scheduler does have the option to use a resource calculator capable of accounting for cpu utilization. That made me (wrongly?) assume that the capacity scheduler can take into account (measured?) memory and CPU utilization.


@jerryshao
Contributor Author

@jdesmet, by default, if cpu scheduling is not enabled in YARN, the vcore usage you see in YARN's web UI (1 per container) is actually meaningless. I think what confuses you is that you specified a different number, yet YARN shows only 1 core.

This is only a YARN UI issue, which is quite misleading when cpu scheduling is not enabled; internally, all of YARN's resource accounting for scheduling is correct.
