diff --git a/core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala b/core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala
index 85409d599ccaa..1570f869c5bd0 100644
--- a/core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala
+++ b/core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala
@@ -47,8 +47,8 @@ import org.apache.spark.util.{Clock, SystemClock, ThreadUtils, Utils}
  * executors that could run all current running and pending tasks at once.
  *
  * Increasing the target number of executors happens in response to backlogged tasks waiting to be
- * scheduled. If the scheduler queue is not drained in N seconds, then new executors are added. If
- * the queue persists for another M seconds, then more executors are added and so on. The number
+ * scheduled. If the scheduler queue is not drained in M seconds, then new executors are added. If
+ * the queue persists for another N seconds, then more executors are added and so on. The number
  * added in each round increases exponentially from the previous round until an upper bound has been
  * reached. The upper bound is based both on a configured property and on the current number of
  * running and pending tasks, as described above.
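
The ramp-up policy described in the comment above can be sketched as follows. This is a minimal standalone illustration, not Spark's actual implementation: the doubling schedule, the `rampUp` name, and the `maxNeeded`/`configuredMax` parameters are assumptions made for this sketch. It shows how the target grows exponentially each round the backlog persists, capped both by the configured maximum and by the number of tasks that could run at once.

```scala
// Sketch: each round the scheduler queue stays backlogged, the number of
// executors to add doubles, and the running total is capped by an upper
// bound derived from a configured property and the current task count.
object RampUpSketch {
  // Returns the sequence of target executor counts, round by round,
  // ending once the upper bound is reached.
  def rampUp(maxNeeded: Int, configuredMax: Int): Seq[Int] = {
    val upperBound = math.min(maxNeeded, configuredMax)
    // State is (current target, number to add next round); the addition
    // amount starts at 1 and doubles while the backlog persists.
    Iterator.iterate((0, 1)) { case (total, toAdd) =>
      (math.min(total + toAdd, upperBound), toAdd * 2)
    }.map(_._1)
     .takeWhile(_ < upperBound)
     .toSeq :+ upperBound
  }

  def main(args: Array[String]): Unit = {
    // e.g. 10 running/pending tasks but a configured maximum of 8 executors
    println(rampUp(maxNeeded = 10, configuredMax = 8))
  }
}
```

Running the sketch with a cap of 8 shows the doubling rounds (1, then 2 more, then 4 more) until the bound cuts the final round short.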