
Simplify Deployment and ReplicaSet timeout condition #193

Merged · 1 commit merged from fix_deploy_timeout into master on Oct 26, 2017

Conversation

@KnVerey (Contributor) commented on Oct 26, 2017:

There were two extra sources of Deployment timeout with the previous conditions:

  • the ReplicaSet hard timeout, which should have matched but didn't
  • the Pod hard timeout, which is higher, and which it doesn't even
    really make sense to fail Deployments on

This was unnecessarily complex.
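A minimal sketch of what the simplification removes (the deleted lines are collapsed in the diff below, so this reconstruction is an assumption based on the method names quoted in the review discussion):

```ruby
# Before (assumed): the ReplicaSet overrode deploy_timed_out? so the rollout
# could also fail once every pod hit the (higher) Pod hard timeout.
def deploy_timed_out?
  super || @pods.all?(&:deploy_timed_out?)
end

# After: the override is deleted entirely; only the inherited
# ReplicaSet hard timeout applies.
```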

@ibawt (Contributor) left a comment:

I don't quite understand the failure conditions but :shipit:

@dwradcliffe (Member) left a comment:

I think this makes sense

@KnVerey merged commit 693fae9 into master on Oct 26, 2017
@KnVerey deleted the fix_deploy_timeout branch on Oct 26, 2017 at 17:14
@@ -50,10 +50,6 @@ def timeout_message
       @pods.map(&:timeout_message).compact.uniq.join("\n")
     end

-    def deploy_timed_out?
@KnVerey (Contributor, Author) replied:

The most pragmatic reason is that @pods.all?(&:deploy_timed_out?) will never be true before super anyway, since the Pod timeout is double the RS timeout; this is just making the behaviour more difficult to understand with no real benefit. In fact, the Pod hard timeout was chosen based on the needs of bare pods, which are assumed to be running a single high-value pseudo-inline task, so referencing it from RS doesn't really make sense. Finally, the supported, k8s-native way to time out a deployment whose pods aren't rolling in time is using progressDeadlineSeconds.
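For reference, progressDeadlineSeconds is set on the Deployment spec itself. A minimal sketch of using it (the field and its semantics are Kubernetes' own; the manifest names and values here are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web            # hypothetical name
spec:
  # If the rollout makes no progress for this many seconds, Kubernetes marks
  # the Progressing condition False with reason ProgressDeadlineExceeded;
  # this is the supported, k8s-native way to time out a stuck rollout.
  progressDeadlineSeconds: 600
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.13   # hypothetical image
```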
