Backoff policy and failed pod limit #583
Conversation
contributors/design-proposals/job.md
Outdated
limits, described above, are set.

All of the above fields will be optional and will apply no matter which `restartPolicy`
is set on a `PodTemplate`.
I was struggling with this idea, but I think we have the necessary information when a failure happens in both cases. With Never we count failures ourselves, with OnFailure we can rely on the number of restarts. Alternatively, I was thinking to address this only for Never, but I want to hear from others.
For our use case it would be useful to rely on the OnFailure policy and then count the number of restarts (this is in fact what we are doing from the outside, watching status over stream connections). Since we aim to submit large amounts of jobs, it would be better to have only one container retrying rather than having a new pod per failure spawned by the job controller. Unless you think this is not going to end up creating performance issues.
I interpret the Job restart limit to be predicated on the number of pod failures. If a pod has a restart policy of OnFailure, by my reading of the documentation, a pod in which its sole container terminates in failure means the whole pod transitions to the "Failed" state, perhaps only momentarily, even if the container gets restarted and the pod transitions back to the "Running" state. However, the "Example states" section at the end of the same document never mentions such a transition.
That means that to count the number of attempts for a pod template with a restart policy of Never, we'd have to count the number of pods created for the Job. For a policy of OnFailure, that count is the number of pods created (assuming it's possible for a Job to lose a pod and have to schedule a replacement) plus the number of container restarts. (Each pod creation implies each container starting once.)
However, for a pod with multiple containers, we're then treating each failure of a container as one failure of the containing pod. That's confusing to reason about. The alternative is to say that a pod with a restart policy of OnFailure only participates in this accounting when the entire pod fails, such as the disk failing or the hosting node being removed from the cluster.
The reason I went with both, actually, is that I don't want to connect the pod restart policy with the job backoff policy in the first place. Additionally, if the pod's restart policy is Never, the job controller will keep creating new pods alongside the old ones (that's the Job contract), which in turn the FailedPodsLimit can address.
Currently Never is the only policy that returns the pod failure back to the job controller, which creates a new pod to satisfy completions. With OnFailure we're passing that responsibility to the kubelet. After talking to @Kargakis I'm starting to think we should've restricted jobs to only use Never, but it's too late now because it would break too many folks. That's why I'd go with the backoff policy not being connected to the pod's restart policy, to limit the amount of confusion we already caused by allowing both restart policies. Does that make sense?
It does make sense to me. It's important that users understand, though, that with OnFailure a container failure counts against the Job retry accounting.
If a Job uses pods with three containers, and two of the containers fail "at the same time" and get restarted by the kubelet, does that count as two retries of the Job?
I'll add that explanation, in that case.
All this makes perfect sense to me. Would you like me to start demonstrating this in my PR #41451?
contributors/design-proposals/job.md
Outdated
// Optional number of retries, before marking this job failed.
BackoffLimit *int

// Optional time (in seconds), how log a job should be retried before marking it failed.
long*
contributors/design-proposals/job.md
Outdated
By design, Jobs do not have any notion of failure, other than Pod's `restartPolicy`
which is mistakenly taken as Job's restart policy ([#30243](https://github.com/kubernetes/kubernetes/issues/30243),
[#[43964](https://github.com/kubernetes/kubernetes/issues/43964)]). There are
situation where one wants to fail a Job after some amount of retries over certain
- Did you mean "over a certain period" or "over multiple periods?"
a certain period, will fix.
contributors/design-proposals/job.md
Outdated
[#[43964](https://github.com/kubernetes/kubernetes/issues/43964)]). There are
situation where one wants to fail a Job after some amount of retries over certain
period of time, due to a logical error in configuration etc. To do so we are going
following fields will be introduced, which will control the exponential backoff
- Is there a sentence fragment missing in between "we are going" and "following fields?"
Maybe, "we are going to introduce the following fields?"
Yup ;)
contributors/design-proposals/job.md
Outdated
situation where one wants to fail a Job after some amount of retries over certain
period of time, due to a logical error in configuration etc. To do so we are going
following fields will be introduced, which will control the exponential backoff
when retrying Job: number of retries and time to retry. The two fields will allow
- s/retrying Job/retrying a Job/
- or maybe s/retrying Job/retrying Jobs/
contributors/design-proposals/job.md
Outdated
period of time, due to a logical error in configuration etc. To do so we are going
following fields will be introduced, which will control the exponential backoff
when retrying Job: number of retries and time to retry. The two fields will allow
creating a fine grain control over the backoff policy, limiting the number of retries
- s/a fine grain/fine-grained/
contributors/design-proposals/job.md
Outdated
following fields will be introduced, which will control the exponential backoff
when retrying Job: number of retries and time to retry. The two fields will allow
creating a fine grain control over the backoff policy, limiting the number of retries
over specified period of time. In the case when only one of them is specified
- s/over specified/over a specified/
- s/is specified/is specified,/
contributors/design-proposals/job.md
Outdated
// before the system tries to terminate it; value must be positive integer
ActiveDeadlineSeconds *int

// Optional number of retries, before marking this job failed.
- s/retries,/retries/
contributors/design-proposals/job.md
Outdated
// Optional number of retries, before marking this job failed.
BackoffLimit *int

// Optional time (in seconds), how log a job should be retried before marking it failed.
- s/), how/) specifying how/
- Did you consider using type *uint instead? That could encourage unintentional overflow when used with time.
contributors/design-proposals/job.md
Outdated
@@ -26,6 +27,31 @@ Jobs are needed for executing multi-pod computation to completion; a good exampl
here would be the ability to implement any type of batch oriented tasks.

## Backoff policy and failed pod limit

By design, Jobs do not have any notion of failure, other than Pod's `restartPolicy`
- s/than Pod's/than a pod's/
contributors/design-proposals/job.md
Outdated
@@ -18,6 +18,7 @@ Several existing issues and PRs were already created regarding that particular s
1. Be able to get the job status.
1. Be able to specify the number of instances performing a job at any one time.
1. Be able to specify the number of successfully finished instances required to finish a job.
1. Be able to specify backoff policy, when job is continuously failing.
- s/specify backoff/specify a backoff/
@nickschuch let's wait for this to get to a stable shape (more or less) and then you can go ahead and open a fresh one for it. I'll close the old one.
I've addressed the first batch of comments.
I cited one nit that we could fix and asked for clarification, but I won't hold up a merge for them.
contributors/design-proposals/job.md
Outdated
is set on a `PodTemplate`. The only difference applies to how failures are counted.
For restart policy `Never` we count actual pod failures (reflected in `.status.failed`
field). With restart policy `OnFailure` we look at pod restarts (calculated from
`.status.containerStatuses[*].restartCount`).
👍
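To make the counting rule quoted above concrete, here is a rough, illustrative Go sketch. The status fields come from the quoted text; the helper itself (name and signature) is an assumption, not the actual controller code:

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
)

// observedFailures sketches the counting rule quoted above: with restart
// policy Never we use the Job's .status.failed counter; with OnFailure we
// sum container restart counts across the Job's pods (an approximation,
// since restarts are observed asynchronously).
func observedFailures(policy corev1.RestartPolicy, jobStatusFailed int32, pods []corev1.Pod) int32 {
	if policy == corev1.RestartPolicyNever {
		return jobStatusFailed
	}
	var restarts int32
	for _, p := range pods {
		for _, cs := range p.Status.ContainerStatuses {
			restarts += cs.RestartCount
		}
	}
	return restarts
}
```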
contributors/design-proposals/job.md
Outdated
too many failed pods left around (as mentioned in [#30243](https://github.com/kubernetes/kubernetes/issues/30243))
to introduce following fields will be introduced, which will control the exponential
backoff when retrying a Job: number of retries and time to retry. The two fields
will allow creating a fine-grained control over the backoff policy, limiting the
- Reading this again, I think this would read better as "will allow fine-grained control over".
contributors/design-proposals/job.md
Outdated
BackoffLimit *int

// Optional time (in seconds), how log a job should be retried before marking it failed.
// Optional time (in seconds) specifying how long a job should be retried before marking it failed.
- Obviously this duration shouldn't be negative (in which case I assume we'd ignore it, or reject it), but what about zero?
Since it's a pointer, the pointer being nil means that it's not set, but what if it's present but zero? Is that the same as it not being set, or is that a valid duration meaning that we won't wait any amount of time for a job to complete (that it must complete immediately)?
We usually require those to be >= 0; it'll be checked in validation.
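A minimal sketch of the kind of validation being described, assuming nil means "unset" and a present value must be non-negative (the helper below is hypothetical, not the actual apimachinery validation code):

```go
package example

import "fmt"

// validateBackoff sketches the ">= 0" rule mentioned above: nil means the
// field is unset; a present value must be non-negative (zero is allowed and
// is distinct from unset).
func validateBackoff(backoffLimit *int32, backoffDeadlineSeconds *int64) error {
	if backoffLimit != nil && *backoffLimit < 0 {
		return fmt.Errorf("backoffLimit must be >= 0, got %d", *backoffLimit)
	}
	if backoffDeadlineSeconds != nil && *backoffDeadlineSeconds < 0 {
		return fmt.Errorf("backoffDeadlineSeconds must be >= 0, got %d", *backoffDeadlineSeconds)
	}
	return nil
}
```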
Nice approach, I think those changes are going to be really useful!
which is mistakenly taken as Job's restart policy ([#30243](https://github.com/kubernetes/kubernetes/issues/30243),
[#[43964](https://github.com/kubernetes/kubernetes/issues/43964)]). There are
situation where one wants to fail a Job after some amount of retries over a certain
period of time, due to a logical error in configuration etc. To do so we are going
- I think you are repeating "introduce{,d}". It should be something like: "To do so we are going to introduce the following fields"
contributors/design-proposals/job.md
Outdated
@@ -79,8 +109,21 @@ type JobSpec struct {
// job should be run with. Defaults to 1.
Completions *int

// Optional duration in seconds relative to the startTime that the job may be active
// before the system tries to terminate it; value must be a positive integer.
ActiveDeadlineSeconds *int
- Can uint be used, also in the other fields?
We used ints for consistency, and then do appropriate validation so this is non-negative and bigger than 0.
*int32
*int32
Actually *int64
Can you also update the rest of the fields to their canonical types?
I think this is the deadline for the entire job to complete, regardless of whether there are restarts, right? Document that.
Correct.
It clearly states it's relative to the job's startTime.
contributors/design-proposals/job.md
Outdated
@@ -79,8 +109,21 @@ type JobSpec struct {
// job should be run with. Defaults to 1.
Completions *int

// Optional duration in seconds relative to the startTime that the job may be active
- I think the comments for exported fields should start with the name of the field: "ActiveDeadlineSeconds is an optional..."
Not anymore; the definition will have those in place.
contributors/design-proposals/job.md
Outdated
BackoffLimit *int

// Optional time (in seconds) specifying how long a job should be retried before marking it failed.
BackoffDeadlineSeconds *int
ProgressDeadlineSeconds to match the field in Deployments? Seems like it's the same concept although in Deployment we merely add a condition in the DeploymentStatus.
How would a user typically use ActiveDeadlineSeconds with BackoffDeadlineSeconds?
@Kargakis this is different from ProgressDeadlineSeconds; in the Deployments world, PDS is how long you wait for the deployment to make progress before failing it. With a Job, BackoffDeadlineSeconds applies at any point in time. This means that if a Job expects to reach 5 completions, 4 of them can succeed and only the 5th Pod might be having trouble, and BDS should still apply. That's why I'd rather not use the same name; it's intentional.
@erictune @sdminonne ADS is the overall time for a Job (max lifetime), whereas BDS is counted from the moment the first failure happens. But yes, usually BDS will be smaller than ADS, otherwise you'll never hit BDS in case of a failure.
Well, PDS is not that different. We start counting from the last time we saw progress and once the deadline is exceeded we merely switch the Progressing condition to False. "Failing the deployment" should be something that a higher-level orchestrator decides based on the condition.
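To illustrate the difference discussed here, a hypothetical helper under the proposal's semantics might look like this. The field names (ActiveDeadlineSeconds, BackoffDeadlineSeconds) come from the design text; the function itself is an assumption, not real controller code:

```go
package example

import "time"

// shouldFailJob illustrates the two deadlines as described in this thread:
// ActiveDeadlineSeconds bounds the whole Job from its startTime, while
// BackoffDeadlineSeconds is counted from the moment the first failure happens.
func shouldFailJob(now, startTime time.Time, firstFailure *time.Time,
	activeDeadlineSeconds, backoffDeadlineSeconds int64) bool {
	if now.Sub(startTime) > time.Duration(activeDeadlineSeconds)*time.Second {
		return true // exceeded the Job's overall lifetime (ADS)
	}
	if firstFailure != nil &&
		now.Sub(*firstFailure) > time.Duration(backoffDeadlineSeconds)*time.Second {
		return true // still failing too long after the first failure (BDS)
	}
	return false
}
```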
contributors/design-proposals/job.md
Outdated
BackoffDeadlineSeconds *int

// Optional number of failed pods to retain.
FailedPodsLimit *int
When are pods left around? I guess when pods have restartPolicy=Never? It feels weird to add a feature that is not related directly to the failurePolicy and also is useful for a specific case in Jobs. Can we leave it out of this proposal?
This is for debugging purposes. Leaving pods around should allow you to figure out what went wrong. Without it, and with restart policy Never, you might end up with many failed pods if you set the limits high. That was one of the original requests wrt the backoff policy, so I think it's crucial to address it here.
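As a rough sketch of what applying FailedPodsLimit could look like (the proposal only defines the field; the helper below and its retention-by-recency choice are hypothetical):

```go
package example

import (
	"sort"

	corev1 "k8s.io/api/core/v1"
)

// failedPodsToDelete sketches one way FailedPodsLimit could be applied: keep
// the `limit` most recent failed pods around for debugging and return the
// older ones for deletion.
func failedPodsToDelete(failed []corev1.Pod, limit int) []corev1.Pod {
	if len(failed) <= limit {
		return nil
	}
	// Sort newest first so the pods kept are the most recent failures.
	sort.Slice(failed, func(i, j int) bool {
		return failed[i].CreationTimestamp.After(failed[j].CreationTimestamp.Time)
	})
	return failed[limit:]
}
```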
contributors/design-proposals/job.md
Outdated
to introduce following fields, which will control the exponential backoff when
retrying a Job: number of retries and time to retry. The two fields will allow
fine-grained control over the backoff policy, limiting the number of retries over
a specified period of time. If only one of the two fields is supplied, an exponential
It's not clear what happens if both fields are specified. Is only the number of retries limited or both of the fields limit each other?
Ok, it's clearer after rereading the bullet points below.
contributors/design-proposals/job.md
Outdated
retrying a Job: number of retries and time to retry. The two fields will allow
fine-grained control over the backoff policy, limiting the number of retries over
a specified period of time. If only one of the two fields is supplied, an exponential
backoff with an intervening duration of ten seconds and a factor of two will be
I don't think we should promise our users exponential backoff -- we should give ourselves the freedom to do other things in the future. Just say that we will retry at a time that the system choses, and that in any case the number of retries won't exceed the number specified by the user.
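For illustration only (this comment's point stands, and the exact policy may stay unspecified): the 10s, factor-2 schedule mentioned in the quoted text would produce delays like the ones below. The helper is a sketch, not the controller's actual implementation:

```go
package main

import (
	"fmt"
	"time"
)

// backoffSchedule sketches the exponential schedule discussed above: a 10s
// initial delay doubled on every retry, capped by the retry limit. For a
// limit of 6 it yields 10s, 20s, 40s, 1m20s, 2m40s, 5m20s, matching the
// example sequence that appears later in the proposal.
func backoffSchedule(limit int) []time.Duration {
	delays := make([]time.Duration, 0, limit)
	d := 10 * time.Second
	for i := 0; i < limit; i++ {
		delays = append(delays, d)
		d *= 2
	}
	return delays
}

func main() {
	fmt.Println(backoffSchedule(6)) // [10s 20s 40s 1m20s 2m40s 5m20s]
}
```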
contributors/design-proposals/job.md
Outdated
too many failed pods left around (as mentioned in [#30243](https://github.com/kubernetes/kubernetes/issues/30243)),
we are going to introduce a field which will allow specifying the maximum number
of failed pods to keep around. This number will also take effect if none of the
limits described above are set.
Is the intent that users will typically set this to 0 unless they expect to need to debug? Or do we typically want to keep around at least 1 failed pod so the user can get the logs?
I vote to have this defaulted to 1
1 seems like a reasonable default.
contributors/design-proposals/job.md
Outdated
is set on a `PodTemplate`. The only difference applies to how failures are counted.
For restart policy `Never` we count actual pod failures (reflected in `.status.failed`
field). With restart policy `OnFailure` we look at pod restarts (calculated from
`.status.containerStatuses[*].restartCount`).
This cannot be synchronous, though, so the limit has to be approximate. Document that it is precise with Never, and approximate with OnFailure?
Yup, makes sense.
contributors/design-proposals/job.md
Outdated
ActiveDeadlineSeconds *int

// Optional number of retries before marking this job failed.
BackoffLimit *int
Why BackoffLimit instead of RetryLimit or TriesLimit?
I went with Backoff as a prefix for every field related to that, since we're talking about Backoff policy.
contributors/design-proposals/job.md
Outdated
[#[43964](https://github.com/kubernetes/kubernetes/issues/43964)]). There are
situation where one wants to fail a Job after some amount of retries over a certain
period of time, due to a logical error in configuration etc. To do so we are going
to introduce following fields will be introduced, which will control the exponential
will be introduced
@erictune @Kargakis I've addressed your comments, ptal once more. @sdminonne mind pointing the person who will be implementing this at it to have a look as well?
@soltysh Cedric's GH account is @clamoriniere1A
@clamoriniere1A mind having a look at this and commenting?
@sdminonne sure, I have started to look at it.
I started to work on the Job backoff policy and failed pod limit: kubernetes/kubernetes#48075
Automatic merge from submit-queue |
Fix: kubernetes/community#583 Job failure policy integration in JobController. From the JobSpec.BackoffLimit and JobSpec.BackoffSeconds the JobController will define the backoff duration between Job retries. Currently the number of retries for each job is stored in the DynamicBackoff struct that is local to the JobController. It means that if the JobController restarts, the number of retries will be lost and the backoff duration will be reset to 0.
For the "Backoff policy and failed pod limit" implementation (kubernetes/community#583), we need to retrieve the number of backoff retry and also be able to set a different initial backoff duration for each entry. This Pull request adds two new methods to ```Backoff```: - ```NextWithInitDuration```: as ```Next``` the purpose of this method is to increment the backoff delay exponentially. In addition, you need to provide the initial delay duration if the entry is not already present. - ```GetWithRetryNumber```: returns the backoff delay associated to an entry and also the retry number corresponding to the number of time that the ```Next|GetWithRetryNumber``` has been call.
For the "Backoff policy and failed pod limit" implementation (kubernetes/community#583), we need to retrieve the number of backoff retry and also be able to set a different initial backoff duration for each entry. This Pull request adds two new methods to ```Backoff```: - ```NextWithInitDuration```: as ```Next``` the purpose of this method is to increment the backoff delay exponentially. In addition, you need to provide the initial delay duration if the entry is not already present. - ```GetWithRetryNumber```: returns the backoff delay associated to an entry and also the retry number corresponding to the number of time that the ```Next|GetWithRetryNumber``` has been call.
Add new fields in api v1.JobSpec object for backoff policy - BackoffLimit - FailedPodsLimit fixes: kubernetes/community#583
Automatic merge from submit-queue (batch tested with PRs 51335, 51364, 51130, 48075, 50920) [API] Feature/job failure policy **What this PR does / why we need it**: Implements the Backoff policy and failed pod limit defined in kubernetes/community#583 **Which issue this PR fixes**: fixes #27997, fixes #30243 **Special notes for your reviewer**: This is a WIP PR, I updated the api batchv1.JobSpec in order to prepare the backoff policy implementation in the JobController. **Release note**: ```release-note Add backoff policy and failed pod limit for a job ```
…icy_controller Automatic merge from submit-queue Job failure policy controller support **What this PR does / why we need it**: Start implementing the support of the "Backoff policy and failed pod limit" in the ```JobController``` defined in kubernetes/community#583. This PR depends on a previous PR #48075 that updates the K8s API types. TODO: * [X] Implement ```JobSpec.BackoffLimit``` support * [x] Rebase when #48075 has been merged. * [X] Implement end2end tests implements kubernetes/community#583 **Special notes for your reviewer**: **Release note**: ```release-note Add backoff policy and failed pod limit for a job ```
This recent development looks good, but if I may ask, what is the current best config/pattern for a try-once job in 1.7.3? TIA
@paulwalker there's nothing, unfortunately. You'd need to implement a similar mechanism inside your job manually, I'm afraid 😞
Or run a pod with RestartPolicy=Never.
@erictune can you elaborate on "Job terminated after (approximately) first Pod Failure -- must use with Never to get close to Run-Once behavior"? Why is that only close to run-once behavior?
With these changes, a job with RestartPolicy=Never and BackoffLimit=0 will run once under almost all circumstances.
However, the communication between the job controller, apiserver, kubelet and docker is not atomic. So, it is possible that under some very unlikely combination of crashes/restarts in the kubelet, docker and/or job controller, the program in the Job's pod template might be started twice.
Basically, I would use it for any task that has built-in idempotence, and for most types of tasks that don't have built-in idempotence. I would not use it for, say, transferring money without some additional application-level check to avoid duplicates.
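For reference, the "close to run-once" setup described above would look roughly like the sketch below, using the BackoffLimit field this proposal introduced. The object name, image and command are placeholders, and the whole snippet is illustrative only:

```go
package example

import (
	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// tryOnceJob sketches the "close to run-once" configuration described above:
// restartPolicy Never plus backoffLimit 0, so the Job is not retried after
// the first pod failure (modulo the non-atomicity caveats noted above).
func tryOnceJob() *batchv1.Job {
	backoffLimit := int32(0)
	return &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "try-once"},
		Spec: batchv1.JobSpec{
			BackoffLimit: &backoffLimit,
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyNever,
					Containers: []corev1.Container{
						{Name: "task", Image: "busybox", Command: []string{"sh", "-c", "echo do-the-work"}},
					},
				},
			},
		},
	}
}
```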
@erictune thanks for the explanation and that makes perfect sense. 👍
This will result in the following retry sequence: 10s, 20s, 40s, 1m20s, 2m40s,
5m20s. After which the job will be considered failed.

Additionally, to help debug the issue with a Job, and limit the impact of having
Why not also limit the number of successful pods, which may also take a lot of resources?
Automatic merge from submit-queue Backoff policy and failed pod limit This update addresses problems raised in kubernetes/kubernetes#30243 and kubernetes/kubernetes#43964. @erictune I've mentioned this to you during last sig-apps, ptal @kubernetes/sig-apps-feature-requests ptal @lukasheinrich @nickschuch @Yancey1989 @sstarcher @maximz fyi since you all were interested in this