CoarseGrainedSchedulerBackend.scala
@@ -422,16 +422,19 @@ class CoarseGrainedSchedulerBackend(scheduler: TaskSchedulerImpl, val rpcEnv: RpcEnv)
logWarning(s"Executor to kill $id does not exist!")
}

// If an executor is already pending to be removed, do not kill it again (SPARK-9795)
val executorsToKill = knownExecutors.filter { id => !executorsPendingToRemove.contains(id) }
Contributor:
Nit: It looks like unknownExecutors is useful only for that one log statement. If we don't need that log line, we could reduce the number of set copies and traversals. How about something like this (this is less scala-like, but reduces the number of traversals and copies):

val knownExecutors = new HashSet[String]
executorIds.foreach { id =>
  if (executorDataMap.contains(id)) {
    knownExecutors += id
  }
}

This also makes the other changes in this file unnecessary.

Contributor Author:
That looks fine, but in this patch I wanted to limit the scope of the changes so I'm going to leave this as is.

+    executorsPendingToRemove ++= executorsToKill
+
     // If we do not wish to replace the executors we kill, sync the target number of executors
     // with the cluster manager to avoid allocating new ones. When computing the new target,
     // take into account executors that are pending to be added or removed.
     if (!replace) {
-      doRequestTotalExecutors(numExistingExecutors + numPendingExecutors
-        - executorsPendingToRemove.size - knownExecutors.size)
+      doRequestTotalExecutors(
+        numExistingExecutors + numPendingExecutors - executorsPendingToRemove.size)
     }

-    executorsPendingToRemove ++= knownExecutors
-    doKillExecutors(knownExecutors)
+    doKillExecutors(executorsToKill)
   }

   /**
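To make the new bookkeeping concrete, here is a minimal, self-contained sketch of the idea (not the actual CoarseGrainedSchedulerBackend code; the names KillTwiceSketch, existing, pendingToAdd, pendingToRemove, and kill are made up for illustration). It shows why filtering out executors that are already pending removal keeps the requested total stable when the same executor is killed twice:

import scala.collection.mutable

object KillTwiceSketch {
  // Hypothetical stand-ins for the backend's bookkeeping fields.
  val existing = Set("exec-1", "exec-2")          // registered executors
  val pendingToAdd = 0                            // executors requested but not yet registered
  val pendingToRemove = mutable.HashSet[String]() // executors already asked to be killed

  // Mirrors the patched logic: skip executors already pending removal,
  // then recompute the target from the pending-to-remove set alone.
  def kill(ids: Seq[String]): Int = {
    val known = ids.filter(id => existing.contains(id))
    val toKill = known.filter(id => !pendingToRemove.contains(id))
    pendingToRemove ++= toKill
    existing.size + pendingToAdd - pendingToRemove.size // new target sent to the cluster manager
  }

  def main(args: Array[String]): Unit = {
    println(kill(Seq("exec-1"))) // 1: the target drops once
    println(kill(Seq("exec-1"))) // 1: a repeated kill of the same executor is a no-op
  }
}

Under the old code, the target also subtracted knownExecutors.size on top of executorsPendingToRemove.size, so the second identical request would have computed 2 + 0 - 1 - 1 = 0, lowering the target twice for a single executor.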
StandaloneDynamicAllocationSuite.scala
@@ -283,6 +283,26 @@ class StandaloneDynamicAllocationSuite
     assert(master.apps.head.getExecutorLimit === 1000)
   }

+  test("kill the same executor twice (SPARK-9795)") {
+    sc = new SparkContext(appConf)
+    val appId = sc.applicationId
+    assert(master.apps.size === 1)
+    assert(master.apps.head.id === appId)
+    assert(master.apps.head.executors.size === 2)
+    assert(master.apps.head.getExecutorLimit === Int.MaxValue)
+    // sync executors between the Master and the driver, needed because
+    // the driver refuses to kill executors it does not know about
+    syncExecutors(sc)
+    // kill the same executor twice
+    val executors = getExecutorIds(sc)
+    assert(executors.size === 2)
+    assert(sc.killExecutor(executors.head))
+    assert(sc.killExecutor(executors.head))
+    assert(master.apps.head.executors.size === 1)
+    // The limit should not be lowered twice
+    assert(master.apps.head.getExecutorLimit === 1)
+  }
+
   // ===============================
   // | Utility methods for testing |
   // ===============================