@@ -23,6 +23,7 @@ import org.apache.spark.deploy.k8s.Constants._
import org.apache.spark.internal.Logging
import org.apache.spark.internal.config.{PYSPARK_DRIVER_PYTHON, PYSPARK_PYTHON}
import org.apache.spark.internal.config.ConfigBuilder
import org.apache.spark.network.util.ByteUnit

private[spark] object Config extends Logging {

@@ -675,6 +676,42 @@ private[spark] object Config extends Logging {
.checkValue(value => value > 0, "Maximum number of pending pods should be a positive integer")
.createWithDefault(Int.MaxValue)

val KUBERNETES_JOB_QUEUE = ConfigBuilder("spark.kubernetes.job.queue")
.doc("The name of the queue to which the job is submitted. This info " +
"will be stored in configuration and passed to specified feature step.")
.version("3.3.0")
.stringConf
.createWithDefault("default")

val KUBERNETES_JOB_MIN_CPU = ConfigBuilder("spark.kubernetes.job.minCPU")
.doc("The minimum CPU for running the job. This info " +
"will be stored in configuration and passed to specified feature step.")
.version("3.3.0")
.doubleConf
.createWithDefault(2.0)

val KUBERNETES_JOB_MIN_MEMORY = ConfigBuilder("spark.kubernetes.job.minMemory")
.doc("The minimum memory for running the job, in MiB unless otherwise specified. This info " +
"will be stored in configuration and passed to specified feature step.")
.version("3.3.0")
.bytesConf(ByteUnit.MiB)
.createWithDefaultString("3g")

val KUBERNETES_JOB_MIN_MEMBER = ConfigBuilder("spark.kubernetes.job.minMember")
.doc("The minimum number of pods running in a job. This info " +
"will be stored in configuration and passed to specified feature step.")
.version("3.3.0")
.intConf
.checkValue(value => value > 0, "The minimum number should be a positive integer")
.createWithDefault(1)
Member:
Does this mean that the driver and the executor(s) will be on the same pod?

Yikun (Member, Author), Feb 8, 2022:
No, it's for the driver: in the spot-instance case we allow the user to create only the driver pod first, so the default of 1 is equivalent to no limit.
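For reference, a minimal sketch (not part of this diff) of how these typed entries resolve when read back from a SparkConf; it assumes code inside Spark's own k8s module, since Config and the typed getter are private[spark]:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.deploy.k8s.Config._

// Hypothetical driver-side snippet, only to illustrate the defaults above.
val sparkConf = new SparkConf()

// minMember defaults to 1, so only the driver pod is required up front,
// which matches the "equivalent to no limit" comment above.
val minMember: Int = sparkConf.get(KUBERNETES_JOB_MIN_MEMBER)      // 1

// bytesConf(ByteUnit.MiB) stores the value as a Long in MiB, so the
// "3g" default resolves to 3072.
val minMemoryMiB: Long = sparkConf.get(KUBERNETES_JOB_MIN_MEMORY)  // 3072
```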


val KUBERNETES_JOB_PRIORITY_CLASS_NAME = ConfigBuilder("spark.kubernetes.job.priorityClassName")
.doc("The priority of the running job. This info " +
"will be stored in configuration and passed to specified feature step.")
.version("3.3.0")
.stringConf
.createOptional
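The doc strings describe these values as "stored in configuration and passed to specified feature step". Below is a rough, hypothetical sketch of what such a consumer could look like; JobMetadataFeatureStep and the scheduling.example.org annotation keys are invented for illustration, and the actual step wired up by this PR may instead build a scheduler-specific resource such as a Volcano PodGroup.

```scala
import io.fabric8.kubernetes.api.model.PodBuilder

import org.apache.spark.deploy.k8s.{KubernetesDriverConf, SparkPod}
import org.apache.spark.deploy.k8s.Config._
import org.apache.spark.deploy.k8s.features.KubernetesFeatureConfigStep

// Hypothetical feature step: copies the job-level settings onto the driver
// pod as annotations so an external scheduler could act on them.
private[spark] class JobMetadataFeatureStep(conf: KubernetesDriverConf)
  extends KubernetesFeatureConfigStep {

  override def configurePod(pod: SparkPod): SparkPod = {
    val annotatedPod = new PodBuilder(pod.pod)
      .editOrNewMetadata()
        .addToAnnotations("scheduling.example.org/queue",
          conf.get(KUBERNETES_JOB_QUEUE))
        .addToAnnotations("scheduling.example.org/min-member",
          conf.get(KUBERNETES_JOB_MIN_MEMBER).toString)
        .addToAnnotations("scheduling.example.org/min-memory-mib",
          conf.get(KUBERNETES_JOB_MIN_MEMORY).toString)
      .endMetadata()
      .build()
    SparkPod(annotatedPod, pod.container)
  }
}
```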

val KUBERNETES_DRIVER_LABEL_PREFIX = "spark.kubernetes.driver.label."
val KUBERNETES_DRIVER_ANNOTATION_PREFIX = "spark.kubernetes.driver.annotation."
val KUBERNETES_DRIVER_SERVICE_ANNOTATION_PREFIX = "spark.kubernetes.driver.service.annotation."