[Spark-15155][Mesos] Optionally ignore default role resources #12933
Conversation
|
@tnachen I'd like to request you review this change, since you are the author of the original mesos role code. |
|
So from what I can see, what you really want is just to not select resources from the * role, right? |
|
ok to test |
|
Test build #57891 has finished for PR 12933 at commit
|
|
@tnachen exactly, I want to keep Spark from grabbing * roles, as in my use case I have a particular spark cluster that I want to isolate from other clusters. |
|
Test build #57905 has finished for PR 12933 at commit
|
|
MiMa test died here: I don't think this is due to my change, but I'm really not familiar with the MiMa test. |
|
I see. I wonder whether, besides the wildcard role, it ever makes sense to have two lists of roles: one set that you are registered with and another set you want to accept resources for. I cannot think of any possible scenario. |
|
I see that as a potential solution; however, at some future point Mesos will support frameworks which can hold multiple roles (MESOS-1763 😉), so perhaps leaving it in this form will ease that transition (though we'd then need to change the other property to |
|
Yes, I am aware of multiple roles, but I'm not sure in what situation you would want to register with multiple roles but only use one role's resources. |
|
True. So really the desired behavior is always just to ignore |
|
I'm updating the title of the PR to reflect the change in approach. I think the boolean property will be sufficient. |
|
Alright, I've modified the code to convert from a property which enumerated the accepted roles to one which simply ignores the default role when it is set. I also removed one unit test, which was redundant with the test that checks that we accept all roles when |
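For reference, a hedged sketch of how the pair of properties would be set from application code. `spark.mesos.ignoreDefaultRoleResources` is the property proposed in this PR (the PR was eventually closed, so this is illustrative only); the app name, master URL, and role name are placeholders.

```scala
import org.apache.spark.SparkConf

// Illustrative only: "spark.mesos.ignoreDefaultRoleResources" is the property proposed
// in this PR; the master URL and role name below are placeholders.
val conf = new SparkConf()
  .setAppName("role-isolated-job")
  .setMaster("mesos://zk://zk1:2181/mesos")
  .set("spark.mesos.role", "spark")                       // role the framework registers with
  .set("spark.mesos.ignoreDefaultRoleResources", "true")  // decline resources offered under "*"
```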
|
Test build #57989 has finished for PR 12933 at commit
|
|
Rebasing against master. |
|
Test build #57993 has finished for PR 12933 at commit
|
|
@hellertime Can you rebase and submit again to rerun the tests? |
|
@hellertime ping |
|
Hi! Was busy with the day job. I didn't mean to let this slip! Absolutely will rebase and retest. Thanks.
|
|
@hellertime Are you able to rebase? |
Force-pushed from d2b7ad4 to e81ff73
|
@tnachen rebased with master |
|
Test build #66740 has finished for PR 12933 at commit
|
Force-pushed from e81ff73 to 181f207
|
Fixed scala style error |
|
Test build #66741 has finished for PR 12933 at commit
|
Force-pushed from 181f207 to 838dc77
|
Test build #66780 has finished for PR 12933 at commit
|
Force-pushed from 838dc77 to bbe908e
|
Test build #66811 has finished for PR 12933 at commit
|
Force-pushed from bbe908e to 81e07c8
|
Test build #66825 has finished for PR 12933 at commit
|
|
@tnachen I'm not sure what to do about this unit test failure. Running |
|
I just tried running it locally and I'm getting the same error. It seems like with your change that test is simply declining the offer. |
|
You saw the error with
|
Add new boolean property: `spark.mesos.ignoreDefaultRoleResources`. When this property is set Spark will only accept resources from the role passed in the `spark.mesos.role` property. If `spark.mesos.role` has not been set, `spark.mesos.ignoreDefaultRoleResources` has no effect. Additional unit tests added to `MesosSchedulerBackendSuite`, extending the original multi-role test suite.
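A minimal sketch of the semantics described above, assuming the two values have already been read from the configuration; this is not the PR's exact code, just the intended behavior.

```scala
// Sketch only: `role` comes from "spark.mesos.role", `ignoreDefaultRoleResources`
// from "spark.mesos.ignoreDefaultRoleResources".
def acceptedResourceRoles(
    ignoreDefaultRoleResources: Boolean,
    role: Option[String]): Set[String] = {
  val roles =
    if (ignoreDefaultRoleResources && role.isDefined) Set(role)
    else Set(Some("*"), role)
  roles.flatten  // drop the None when no role is configured
}

// acceptedResourceRoles(ignoreDefaultRoleResources = true,  role = Some("spark")) == Set("spark")
// acceptedResourceRoles(ignoreDefaultRoleResources = false, role = Some("spark")) == Set("*", "spark")
// acceptedResourceRoles(ignoreDefaultRoleResources = true,  role = None)          == Set("*")  // flag has no effect
```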
Force-pushed from 81e07c8 to d64b261
|
Test build #66887 has finished for PR 12933 at commit
|
|
I ran
|
  case true if role.isDefined => Set(role)
  case _ => Set(Some("*"), role)
}
val acceptedRoles = roles.flatten
I think we should move this log outside of this helper method, as there might be other contexts calling it in the future.
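A short sketch of what that suggestion could look like at the call site; the helper name matches the diff above, but the logDebug wording is an assumption, not the PR's code.

```scala
// Compute the roles in the helper, log where the result is actually used.
val acceptedResourceRoles = getAcceptedResourceRoles(ignoreDefaultRoleResources, role)
logDebug(s"Accepting resources from Mesos role(s): ${acceptedResourceRoles.mkString(", ")}")
```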
|
@srowen can you help review this? Besides my minor comment, overall it looks fine to me. |
srowen left a comment
I can only usefully comment on style and left a few comments
getResource(o.resources, "cpus") >= driverCpu &&
getResource(o.resources, "mem") >= driverMem
val acceptableResources = o.resources.asScala
  .filter((r: Resource) => acceptedResourceRoles(r.getRole))
Just a style thing, but I think you'd generally just write (r => acceptedResourceRoles(r.getRole))
  .filter((r: Resource) => acceptedResourceRoles(r.getRole))
  .asJava
getResource(acceptableResources, "cpus") >= driverCpu &&
getResource(acceptableResources, "mem") >= driverMem
Indent this continuation 2 spaces to make it clear it's a continuation
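Putting both style comments together, a hedged sketch of what the offer check could look like. For self-containment, `o` is taken here to be a raw Mesos `Offer` proto rather than the PR's wrapper, `getResource` is given a minimal stand-in, and the method name `offerSatisfiesDriver` is made up for illustration.

```scala
import scala.collection.JavaConverters._
import org.apache.mesos.Protos.{Offer, Resource}

// Minimal stand-in for the existing helper: sum the scalar values of a named resource.
def getResource(resources: java.util.List[Resource], name: String): Double =
  resources.asScala.filter(_.getName == name).map(_.getScalar.getValue).sum

// Sketch: only count resources whose role is accepted, then check the driver's needs,
// with the boolean continuation indented two spaces as suggested.
def offerSatisfiesDriver(
    o: Offer,
    acceptedResourceRoles: Set[String],
    driverCpu: Double,
    driverMem: Double): Boolean = {
  val acceptableResources = o.getResourcesList.asScala
    .filter(r => acceptedResourceRoles(r.getRole))
    .asJava
  getResource(acceptableResources, "cpus") >= driverCpu &&
    getResource(acceptableResources, "mem") >= driverMem
}
```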
 * @param props Mesos driver description schedulerProperties map
 */
protected def getAcceptedResourceRoles(props: Map[String, String]): Set[String] = {
  getAcceptedResourceRoles(
This might be my taste only, but I'm finding this style hard to read.
props.get(...).map(_.toBoolean).getOrElse(false)? It's what I'd expect to see in the Spark code base, but I don't know if it's objectively any better. Still, I'd break this out into a val rather than squeeze it into a method arg.
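A quick sketch of that suggestion, assuming `props` is the `Map[String, String]` from the diff above and using the property keys from this PR.

```scala
// Break the flag parsing out into a val instead of inlining it in the call.
val ignoreDefaultRoleResources = props
  .get("spark.mesos.ignoreDefaultRoleResources")
  .map(_.toBoolean)
  .getOrElse(false)
val acceptedRoles = getAcceptedResourceRoles(ignoreDefaultRoleResources, props.get("spark.mesos.role"))
```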
 */
private def getAcceptedResourceRoles(
    ignoreDefaultRoleResources: Boolean,
    role: Option[String]) = {
This seems more readable:
val roles = if (ignoreDefaultRoleResources && role.isDefined) Set(role) else Set(Some("*"), role)
|
Hi @hellertime, this PR seems to have been inactive for a few months since the last review comments above. Would this be better closed if you are not currently able to work on it further? |
## What changes were proposed in this pull request?

This PR proposes to close stale PRs. By "stale" I mean PRs that have review comments from reviewers but whose authors have looked inactive, without any answer to them, for more than a month. I left some comments roughly a week ago to ping, and the authors still look inactive on the PRs below.

These include some PRs suggested to be closed and a PR against another branch which seems obviously inappropriate. Given the comments in the last three PRs below, they are probably worth being taken over by anyone who is interested.

Closes apache#7963
Closes apache#8374
Closes apache#11192
Closes apache#11374
Closes apache#11692
Closes apache#12243
Closes apache#12583
Closes apache#12620
Closes apache#12675
Closes apache#12697
Closes apache#12800
Closes apache#13715
Closes apache#14266
Closes apache#15053
Closes apache#15159
Closes apache#15209
Closes apache#15264
Closes apache#15267
Closes apache#15871
Closes apache#15861
Closes apache#16319
Closes apache#16324
Closes apache#16890
Closes apache#12398
Closes apache#12933
Closes apache#14517

## How was this patch tested?

N/A

Author: hyukjinkwon <gurwls223@gmail.com>

Closes apache#16937 from HyukjinKwon/stale-prs-close.