@@ -125,6 +125,7 @@ private[spark] class YarnClientSchedulerBackend(
*/
private def asyncMonitorApplication(): Unit = {
assert(client != null && appId != null, "Application has not been submitted yet!")
+ val interval = conf.getLong("spark.yarn.client.progress.pollinterval", 1000)
Contributor:
I would change this to something like spark.yarn.client.amPollInterval.

Contributor Author:
Thank you, but since it is the client that polls the RM for the application report, maybe "spark.yarn.client.progress.pollinterval" is better.

Contributor:
My main concern is that we don't really have a notion of "progress" in Spark on YARN. In MapReduce, progress traditionally referred to how far along the app was, but Spark doesn't provide a similar notion. YARN itself does have a notion of app progress, which is not related to what's here.

Contributor Author:
Yeah, you are right. In PR #5305 I use client.monitorApplication, so we can use "spark.yarn.report.interval" to change the YARN client monitor interval. Thank you.
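
For illustration only, a minimal sketch of how a user might override such an interval when building a SparkConf; the key name follows the comment above and is an assumption, not necessarily what was finally merged:

```scala
import org.apache.spark.SparkConf

// Hypothetical usage: poll every 5 seconds instead of the default 1 second.
// The key "spark.yarn.report.interval" is taken from the discussion above.
val conf = new SparkConf()
  .setAppName("yarn-client-example")
  .set("spark.yarn.report.interval", "5000") // value in milliseconds
```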

Member:
OK, is this superseded by #5305? It sounds like you have two different changes related to SPARK-3596, and I'm wondering whether they are mutually exclusive, whether one or both is preferred, or what.

Contributor Author:
@srowen Yes, #5305 can solve this issue; maybe we can close this PR first.

Member:
Up to your judgment; you're able to open and close PRs.

Contributor Author:
OK

val t = new Thread {
override def run() {
while (!stopping) {
@@ -143,7 +144,7 @@ private[spark] class YarnClientSchedulerBackend(
sc.stop()
stopping = true
}
- Thread.sleep(1000L)
+ Thread.sleep(interval)
}
Thread.currentThread().interrupt()
}
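
Below is a self-contained sketch of the polling pattern this diff implements: read an interval from SparkConf and sleep between status checks. It is illustrative only; the config key and default match the line added above, but appIsRunning is a placeholder for the real call that fetches the application report from the YARN ResourceManager.

```scala
import org.apache.spark.SparkConf

object PollIntervalSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
    // Same key and default as the line added in this diff.
    val interval = conf.getLong("spark.yarn.client.progress.pollinterval", 1000L)

    var stopping = false
    while (!stopping) {
      if (!appIsRunning()) {
        // In the real backend this is where the SparkContext would be stopped.
        stopping = true
      } else {
        // Wait the configured interval before asking the RM again.
        Thread.sleep(interval)
      }
    }
  }

  // Placeholder so the sketch compiles; the real code queries YARN for an
  // application report and checks whether the app has reached a terminal state.
  private def appIsRunning(): Boolean = false
}
```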