Conflict between default kubernetes rollout #108

Closed
messiahUA opened this issue Jul 16, 2021 · 6 comments


messiahUA commented Jul 16, 2021

I've tried Wave for the first time, and it looks like there is a possible conflict with the default Kubernetes rollout when you modify a ConfigMap/Secret reference: Kubernetes starts its own rollout, and then Wave makes a change and starts another rollout.

I don't think it is critical, because I can't come up with a case where it would be, but maybe it can cause some unforeseen race conditions. And as far as I understand Wave's logic, there is no way around such situations, because it needs to update the annotation in any case?
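To illustrate, here is a minimal sketch of the kind of Deployment I mean (all names are hypothetical; the wave.pusher.com annotation keys follow Wave's README, but treat the exact keys as an assumption):

```yaml
# Hypothetical Deployment watched by Wave. Changing the configMapRef name below
# triggers the built-in Kubernetes rollout; Wave then updates its config-hash
# annotation on the Pod template, which triggers a second rollout.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  annotations:
    wave.pusher.com/update-on-config-change: "true"  # opts the Deployment into Wave
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      # Wave maintains a hash of all referenced ConfigMaps/Secrets here,
      # e.g. wave.pusher.com/config-hash: "<sha>", which it rewrites after the edit.
    spec:
      containers:
        - name: app
          image: example.com/my-app:1.0   # placeholder image
          envFrom:
            - configMapRef:
                name: configmap1          # editing this to configmap2 starts the first rollout
```

Editing the configMapRef is enough for Kubernetes to roll the Deployment on its own; Wave's subsequent hash update then rolls it a second time.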

Also, I want to add that in my particular case I need to monitor only one Secret, because ConfigMap updates are managed by another process. Ideally I would configure Wave to watch only a specified resource (the Secret), but unfortunately there is no such feature...

@wonderhoss
Collaborator

Are you referring to a situation where both a ConfigMap and a Deployment are updated at roughly the same time?
In those situations it is possible that, depending on ordering, a new Pod starts while still using the old configuration and is then immediately replaced by one using the new configuration.

Without Wave this could still be a problem, causing some Pods to run with a different configuration from others. Wave merely ensures that in these cases the state everything eventually settles on is the desired one.

Everything in Kubernetes is eventually consistent, so such situations should be expected. Applications should make use of PodDisruptionBudgets and employ techniques that allow code to function with both the old and new versions of a configuration set in order to avoid being negatively impacted.
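For reference, a minimal PodDisruptionBudget along the lines suggested above (the resource names and the threshold are placeholders, not something defined in this issue):

```yaml
# Hypothetical PDB: keep at least 2 Pods of "my-app" available while
# rollouts (Kubernetes-initiated or Wave-initiated) churn the ReplicaSets.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: my-app
```

Note that a PDB only limits voluntary disruptions such as evictions; the pace of the rollouts themselves is still governed by the Deployment's own update strategy.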


messiahUA commented Jul 19, 2021

@gargath I'm referring to the situation where a ConfigMap reference changes in the Deployment. For example, you have a Deployment which references configmap1 and you change it to configmap2: Kubernetes starts its rollout, which is then superseded by Wave's at the same time. As you mentioned, I think this is not a problem and is expected behaviour in Kubernetes, but it was just a thought. Ideally it could perhaps be handled by a webhook that injects the new checksum on edit to prevent the double rollout, but it's probably not worth it.

Also, rather than creating another issue for the question: are there any plans to implement the possibility of monitoring only specified ConfigMap(s)/Secret(s)?

@github-actions

This issue has been inactive for 60 days. If the issue is still relevant please comment to re-activate the issue. If no action is taken within 7 days, the issue will be marked closed.

github-actions bot added the Stale label Sep 18, 2021
toelke reopened this Feb 12, 2024

jabdoa2 commented Apr 28, 2024

We should add an optional mutating webhook to implement this. It would prevent restarts in all cases where the ConfigMaps/Secrets exist prior to the creation of the Deployment. In my opinion that would be worth implementing.

Unfortunately, this will not solve the case for most Helm charts, where the ConfigMap and the Deployment are installed at the same time, especially when another controller (such as cert-manager) is added to the mix. In those cases the Pod might actually be pending until those Secrets exist, start with their first version, and then be restarted by Wave right away.
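For what it's worth, a rough sketch of how such an optional webhook could be registered (the webhook name, service name, namespace, and path are illustrative assumptions, not an existing Wave API):

```yaml
# Hypothetical registration of an optional mutating webhook that would set
# Wave's config-hash annotation at admission time, so the Deployment's first
# rollout already carries the final hash and no second rollout is needed.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: wave-config-hash-injector      # name is an assumption
webhooks:
  - name: config-hash.wave.pusher.com  # assumed webhook name
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Ignore              # don't block Deployments if the webhook is down
    clientConfig:
      service:
        name: wave-webhook             # assumed Service
        namespace: wave                # assumed namespace
        path: /mutate-deployment       # assumed path
    rules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]
```

With failurePolicy: Ignore the webhook stays optional: if it is unavailable, Deployments are admitted unchanged and the current restart behaviour applies.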


jabdoa2 commented Apr 30, 2024

Should be fixed by #155


jabdoa2 commented May 3, 2024

@toelke this one can be closed. I guess I should have commented in the PR instead of here.

toelke closed this as completed May 3, 2024