Conversation

@jerryshao
Contributor

What changes were proposed in this pull request?

Currently, if `spark.dynamicAllocation.initialExecutors` is less than `spark.dynamicAllocation.minExecutors`, Spark silently picks the minExecutors value without any warning, whereas Spark 1.6 would throw an exception for this configuration. This PR proposes adding a warning log when these parameters are configured inconsistently.
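The behavior described above can be illustrated with a dependency-free sketch. The names `resolveInitialExecutors` and its parameters are hypothetical, not Spark's internal API; the point is that the effective initial count is the maximum of the three settings, so a too-small `initialExecutors` is silently overridden:

```scala
// Hypothetical sketch of the resolution logic under discussion, not Spark's
// actual code: the effective initial executor count is the max of
// minExecutors, initialExecutors, and spark.executor.instances.
object InitialExecutorsSketch {
  def resolveInitialExecutors(minExecutors: Int,
                              initialExecutors: Int,
                              numExecutorInstances: Int): Int = {
    if (initialExecutors < minExecutors) {
      // This is the silent case the PR wants to warn about.
      println(s"Warning: initialExecutors ($initialExecutors) is less than " +
        s"minExecutors ($minExecutors); the setting is effectively ignored.")
    }
    Seq(minExecutors, initialExecutors, numExecutorInstances).max
  }

  def main(args: Array[String]): Unit = {
    // initial (1) < min (3), but --num-executors (5) is largest, so 5 wins.
    println(resolveInitialExecutors(minExecutors = 3, initialExecutors = 1,
      numExecutorInstances = 5))
  }
}
```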

How was this patch tested?

Unit test added to verify the scenario.

@SparkQA

SparkQA commented Jul 12, 2016

Test build #62153 has finished for PR 14149 at commit f08eabd.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@jerryshao
Contributor Author

@tgravescs and @rdblue , please help to review, thanks a lot.

```scala
if (conf.get(DYN_ALLOCATION_INITIAL_EXECUTORS) < conf.get(DYN_ALLOCATION_MIN_EXECUTORS)) {
  logWarning(s"${DYN_ALLOCATION_INITIAL_EXECUTORS.key} less than " +
    s"${DYN_ALLOCATION_MIN_EXECUTORS.key} is invalid, will use " +
    s"${DYN_ALLOCATION_MIN_EXECUTORS} instead.")
}
```
Contributor


So this doesn't cover the case where the user specified executor instances < the minimum; seems like we should put out a warning for that also.

Contributor


Also note that the comment above isn't exactly correct, because if the number of executor instances is greater than the min, it would use that number instead of the min executors.

Contributor Author


I didn't consider user-specified executor instances; I'm not sure whether we should also take that configuration into account.

Contributor


Executor instances is what gets set if the user passes --num-executors, which I can easily see happening. If min is already set and is greater, the two differ and we will use min, so we should warn here also.

Contributor


I agree that either spark.executor.instances or initialExecutors being less than the minimum should produce a warning.

@SparkQA

SparkQA commented Jul 13, 2016

Test build #62204 has finished for PR 14149 at commit e69a026.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

```scala
if (conf.get(DYN_ALLOCATION_INITIAL_EXECUTORS) < conf.get(DYN_ALLOCATION_MIN_EXECUTORS)) {
  logWarning(s"${DYN_ALLOCATION_INITIAL_EXECUTORS.key} less than " +
    s"${DYN_ALLOCATION_MIN_EXECUTORS.key} is invalid, will use " +
    s"${DYN_ALLOCATION_MIN_EXECUTORS.key} to override.")
}
```
Contributor


So this isn't necessarily true; it could use num executor instances if that is higher than min. I think we should say we use the max of the other two.

Contributor Author


I think my original meaning is that DYN_ALLOCATION_MIN_EXECUTORS will override DYN_ALLOCATION_INITIAL_EXECUTORS if the former is larger. It's not related to num executor instances; num executor instances can still be larger than DYN_ALLOCATION_MIN_EXECUTORS.

Contributor


Right, but as a user I could interpret that to mean it's going to use the min executors for the initial number, which isn't necessarily true. I'm just worried it will confuse users. How about we change the phrasing to something like: initial less than min is invalid, ignoring its setting, please update your configs. Similar message for the num executor instances below.

Then after the max calculation we could simply print an info message like: using initial executors = value, the max of initial, num executors, and min executors.

These warnings are turning into a lot of extra code/text.
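The messaging suggested above could be sketched roughly as follows. The object and method names are illustrative only (not Spark's real code), though the config key strings are Spark's public configuration names:

```scala
// Hypothetical sketch of the suggested messaging: warn per misconfigured
// key without claiming which value wins, then report the resolved value
// (the max of the three settings) in a single message afterwards.
object ExecutorConfigWarnings {
  def resolve(min: Int, initial: Int, instances: Int): (Int, Seq[String]) = {
    val warnings = Seq.newBuilder[String]
    if (initial < min) {
      warnings += "spark.dynamicAllocation.initialExecutors less than " +
        "spark.dynamicAllocation.minExecutors is invalid, ignoring its " +
        "setting, please update your configs."
    }
    if (instances < min) {
      warnings += "spark.executor.instances less than " +
        "spark.dynamicAllocation.minExecutors is invalid, ignoring its " +
        "setting, please update your configs."
    }
    val resolved = Seq(min, initial, instances).max
    (resolved, warnings.result())
  }

  def main(args: Array[String]): Unit = {
    val (value, warnings) = resolve(min = 3, initial = 1, instances = 5)
    warnings.foreach(w => println(s"WARN $w"))
    println(s"INFO Using initial executors = $value, max of " +
      "spark.dynamicAllocation.initialExecutors, spark.executor.instances " +
      "and spark.dynamicAllocation.minExecutors")
  }
}
```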

Contributor Author


That sounds good to me, let me update the patch.

@SparkQA

SparkQA commented Jul 13, 2016

Test build #62246 has finished for PR 14149 at commit 6dd2a86.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@tgravescs
Contributor

+1

asfgit pushed a commit that referenced this pull request Jul 13, 2016
…s than minExecutors

## What changes were proposed in this pull request?

Currently, if `spark.dynamicAllocation.initialExecutors` is less than `spark.dynamicAllocation.minExecutors`, Spark silently picks the minExecutors value without any warning, whereas Spark 1.6 would throw an exception for this configuration. This PR proposes adding a warning log when these parameters are configured inconsistently.

## How was this patch tested?

Unit test added to verify the scenario.

Author: jerryshao <sshao@hortonworks.com>

Closes #14149 from jerryshao/SPARK-16435.

(cherry picked from commit d8220c1)
Signed-off-by: Tom Graves <tgraves@yahoo-inc.com>
@asfgit asfgit closed this in d8220c1 Jul 13, 2016