
Conversation

@Sephiroth-Lin
Contributor

In PR #5305 we interrupt the monitor thread but forgot to catch the InterruptedException, so the stack trace gets printed to the log; we need to catch it.
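The problem can be illustrated with a minimal Java sketch (interrupt semantics are the same for Spark's Scala code on the JVM). The names here are hypothetical stand-ins, not the actual Spark code: a polling thread catches the InterruptedException raised during shutdown and exits quietly, instead of letting the exception propagate and dump a stack trace into the log.

```java
// Hypothetical sketch, not the actual Spark code: a monitor thread that
// catches InterruptedException so shutdown does not print a stack trace.
public class MonitorThreadSketch {
    // Starts a polling thread, interrupts it, and reports whether the
    // thread exited through the quiet interrupt path.
    static boolean runAndInterrupt() {
        final boolean[] caughtInterrupt = {false};
        Thread monitor = new Thread(() -> {
            try {
                while (true) {
                    Thread.sleep(100); // stand-in for polling the application report
                }
            } catch (InterruptedException e) {
                caughtInterrupt[0] = true; // exit quietly: no stack trace in the log
            }
        });
        monitor.start();
        monitor.interrupt();
        try {
            monitor.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return caughtInterrupt[0];
    }

    public static void main(String[] args) {
        System.out.println("caught interrupt quietly: " + runAndInterrupt());
    }
}
```

Without the catch block, the runnable would terminate with an uncaught InterruptedException, which is exactly the stack-trace noise the PR removes.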

@SparkQA

SparkQA commented Apr 12, 2015

Test build #30112 has finished for PR 5479 at commit 0e2ef1f.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds the following public classes (experimental):
    • case class With(child: LogicalPlan, cteRelations: Map[String, Subquery]) extends UnaryNode
  • This patch does not change any dependencies.

Member

Duplicate of #5451 so it would have been better to collaborate on that rather than open a new PR. However, this is closer to the right fix, so maybe we can converge on this PR.

Why is Thread.currentThread().interrupt() called here? I thought that would only be done to preserve the interrupt state, but then that should only happen in the catch block right? The thread isn't waited-on by anything else and is terminating otherwise in the non-interrupted code path.

Also, is it correct to not stop the SparkContext in this case?
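The interrupt-status point raised above can be demonstrated with a small sketch (plain Java; hypothetical names, not the Spark code). Calling Thread.currentThread().interrupt() inside the catch block re-sets the interrupt flag, which sleep/wait cleared when it threw, so that code further up the thread's stack can still observe the interruption; swallowing the exception loses that status.

```java
// Sketch of the re-interrupt idiom: preserving vs. swallowing interrupt status.
public class ReinterruptSketch {
    // Simulates handling an interrupt. When restore is true, the catch block
    // re-interrupts the current thread to preserve the interrupt status for
    // callers; when false, the interrupt is silently swallowed.
    static boolean handle(boolean restore) {
        Thread.currentThread().interrupt();   // simulate an incoming interrupt
        try {
            Thread.sleep(10);                 // throws immediately and clears the flag
        } catch (InterruptedException e) {
            if (restore) {
                Thread.currentThread().interrupt(); // re-set the flag for callers
            }
        }
        return Thread.interrupted();          // read (and clear) the flag
    }

    public static void main(String[] args) {
        System.out.println(handle(true));  // true: status preserved for callers
        System.out.println(handle(false)); // false: status lost
    }
}
```

As the comment above notes, re-interrupting only helps when the thread keeps running afterward and something checks the flag; a thread that terminates right after the catch block gains nothing from it.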

Contributor Author

Yes, we don't need to call Thread.currentThread().interrupt() here, but I think we do need to stop the SparkContext. If the user kills the app on YARN, we need to stop the SparkContext, right?

Member

Yes, it seems reasonable to stop the context. Should that happen in a finally block so that it happens even if interrupted, or is the interrupted case one where we assume it's already shutting down?

Contributor Author

We interrupt the monitor thread when we call stop(), so in that case we don't need to call sc.stop() again. We add sc.stop() after client.monitorApplication returns just to make sure the SparkContext is stopped when the app has finished, failed, or been killed before we stop the SparkContext ourselves.
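The two shutdown paths described above can be sketched as follows (hypothetical names such as stopContext and startMonitor are stand-ins, not the actual Spark code): if the application ends on its own, the monitor thread stops the context; if the user calls stop() first, the monitor is interrupted and must not stop the context a second time.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of the shutdown ordering discussed above.
public class ShutdownOrderSketch {
    static final AtomicInteger stopCalls = new AtomicInteger();

    static void stopContext() { stopCalls.incrementAndGet(); } // stand-in for sc.stop()

    // The monitor waits for the application to end, then stops the context.
    // If it is interrupted first, stop() is already running, so it just exits.
    static Thread startMonitor(CountDownLatch appFinished) {
        Thread t = new Thread(() -> {
            try {
                appFinished.await();  // stand-in for client.monitorApplication(...)
                stopContext();        // app ended on its own: stop the context
            } catch (InterruptedException e) {
                // interrupted by stop(): do not call stopContext() again
            }
        });
        t.start();
        return t;
    }

    // Runs both shutdown paths and returns how many times the context was stopped.
    static int runBothCases() {
        stopCalls.set(0);
        try {
            // Case 1: the app finishes first, so the monitor stops the context.
            CountDownLatch finished = new CountDownLatch(1);
            Thread m1 = startMonitor(finished);
            finished.countDown();
            m1.join();

            // Case 2: the user calls stop(), which interrupts the monitor.
            Thread m2 = startMonitor(new CountDownLatch(1));
            m2.interrupt();
            m2.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return stopCalls.get();
    }

    public static void main(String[] args) {
        System.out.println("stopContext() calls: " + runBothCases()); // expect 1
    }
}
```

Only the finished-on-its-own path stops the context; the interrupted path exits without a second stop, which is why no finally block is needed here.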

Member

OK, so the net change here is to:

  • log the interruption at info, instead of throwing the exception. Either way, the thread finishes immediately after.
  • remove the spurious interrupt() call here

LGTM

@SparkQA

SparkQA commented Apr 13, 2015

Test build #30129 has finished for PR 5479 at commit f775f93.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.
  • This patch does not change any dependencies.

@asfgit asfgit closed this in 202ebf0 Apr 13, 2015
@Sephiroth-Lin Sephiroth-Lin deleted the SPARK-6870 branch May 15, 2016 10:15