[SPARK-26441][KUBERNETES] Add kind configuration of driver pod #23382
What changes were proposed in this pull request?
Spark on Kubernetes currently starts the driver as a bare Pod, so the entire job fails when the host machine crashes: a driver of kind Pod cannot fail over in that situation. This PR adds a kind configuration for the driver pod that supports Pod, Deployment, and Job. For example, running the driver as a Deployment in a streaming job keeps the driver service highly available even across a host machine crash, while for batch jobs the Job kind offers a configurable backoffLimit for retries.
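As a rough sketch of how such a setting might be supplied at submission time (the property name `spark.kubernetes.driver.pod.kind` below is an assumption chosen for illustration; the actual key introduced by this PR may differ):

```scala
import org.apache.spark.launcher.SparkLauncher

// Sketch only: "spark.kubernetes.driver.pod.kind" is an assumed property name used
// to illustrate the proposed kind configuration; the key added by this PR may differ.
object SubmitWithDriverKind {
  def main(args: Array[String]): Unit = {
    val handle = new SparkLauncher()
      .setMaster("k8s://https://kubernetes.default.svc")
      .setDeployMode("cluster")
      .setAppResource("local:///opt/spark/examples/jars/spark-examples.jar")
      .setMainClass("org.apache.spark.examples.SparkPi")
      .setConf("spark.kubernetes.container.image", "spark:latest")
      // Assumed key; values described in this PR: Pod (default), Deployment, Job
      .setConf("spark.kubernetes.driver.pod.kind", "Deployment")
      .startApplication()

    // Block until the application reaches a terminal state.
    while (!handle.getState.isFinal) Thread.sleep(1000)
  }
}
```

With the driver created as a Deployment, the Kubernetes controller recreates the driver on another node after a node failure, which is what provides the high availability described above for streaming workloads.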
How was this patch tested?
Tested in our production environment. Starting the driver as a Deployment or Job keeps it highly available when a host machine crashes.