This repository has been archived by the owner on May 3, 2022. It is now read-only.

Support a no-op roll out #89

Open
parhamdoustdar opened this issue Apr 8, 2019 · 6 comments
Labels
question Further information is requested

Comments

@parhamdoustdar
Contributor

Right now, we watch for modifications to the Application object and use those events to create a new release.

However, the problem with this approach is that there is no way of forcing a rollout. This would be useful in cases where:

  • The user wants to restart all pods to refresh a value retrieved at runtime
  • The user wants us to re-fetch the chart with the same version

I'm not aware of us having settled on a solution for this in the past, although there were a lot of discussions. The easiest option we came up with, while not perfect, is for people to change a label on their Application object; technically, that results in an update event and causes a new release.

Another idea we discussed was to have an annotation, like shipper.booking.com/noop, that people would set to true to cause a rollout to happen.
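
For illustration, both workarounds boil down to a one-liner against the Application object (my-app and the force-rollout label key are hypothetical, and the noop annotation is only a proposal at this point, not an existing Shipper API):

$ kubectl label --overwrite application my-app force-rollout=$(date +%s)
$ kubectl annotate application my-app shipper.booking.com/noop=true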

The problem with these approaches is that they are not intuitive, and they feel more like a hack around the issue than a proper solution.

@alvrod

alvrod commented Apr 8, 2019

From a usability point of view, Shipper normally operates by having us merge a spec value into the Release with kubectl patch; having to do something similar to trigger a restart is within what a user would expect.

Perhaps there could be a spec.targetVersion, just as we have spec.targetStep.
At first, a release is in version 0. If I want to do a rolling restart, I can
$ kubectl patch release $release -p '{"spec":{"targetVersion":1}}' --type="merge"
then Shipper can perform the necessary steps to gradually create new pods to comply with it.
If I want to do it again, I can
$ kubectl patch release $release -p '{"spec":{"targetVersion":2}}' --type="merge"
etc.

Analogously, there can be an achievedVersion in the status, which Shipper can use to track the progress of the new version.
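
To sketch how progress tracking could look (targetVersion and achievedVersion are proposed fields here, not part of Shipper's current API):

$ kubectl get release $release -o jsonpath='{.status.achievedVersion}'
0

Once this prints the same number as spec.targetVersion, the restart is complete.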

@juliogreff
Contributor

kubectl 1.15 will come with kubectl rollout restart, so we'll have some prior art to base our work on: kubernetes/kubernetes#13488 (comment)
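
For reference, the upstream command looks like this (available from kubectl 1.15 onwards; the deployment name is just an example):

$ kubectl rollout restart deployment/my-deployment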

@ksurent
Contributor

ksurent commented May 17, 2019

> Another idea we discussed was to have an annotation, like shipper.booking.com/noop, that people would set to true to cause a rollout to happen.
> The problem with these approaches is that they are not intuitive, and they feel more like a hack around the issue than a proper solution.

FWIW, I don't think this is a hack. It's very much in the nature of Kubernetes: set an annotation and bump its value every time. And just to make it even more accessible, this could be documented in the FAQ. It certainly doesn't feel like something that would justify changing the API.
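
A minimal sketch of that pattern, assuming a hypothetical shipper.booking.com/restart annotation key on the Application object:

$ kubectl annotate --overwrite application my-app shipper.booking.com/restart="$(date -Iseconds)"

Each invocation writes a new value, which registers as an update event without touching the rest of the spec.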

@parhamdoustdar
Contributor Author

In support of what @ksurent said, kubectl rollout restart does something similar, setting the annotation kubectl.kubernetes.io/restartedAt to something like "2019-07-10T10:26:20+02:00".

Here is the relevant issue
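
Under the hood, kubectl rollout restart simply patches that annotation into the pod template, and the changed template is what triggers the rollout; the equivalent manual patch (the deployment name is just an example) would be:

$ kubectl patch deployment my-deployment -p '{"spec":{"template":{"metadata":{"annotations":{"kubectl.kubernetes.io/restartedAt":"2019-07-10T10:26:20+02:00"}}}}}'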

@osdrv
Contributor

osdrv commented Jul 15, 2019

This proposal raises a very interesting topic from Shipper's contract perspective. Say application A has 2 releases: R1 (deactivated) and R2 (active). In essence, Shipper guarantees to have transitioned application A's state from R1 to R2 gracefully and safely. A no-op restart (say, R2*) breaks this contract in two places: it doesn't progress gracefully, and it doesn't guarantee the resulting state (R2*) would be stable (if we take into account the motivation for this change: re-fetching the chart and re-fetching some env value).

An alternative to this approach might be to let new distinct releases be created from the same Application object. In this case, what would effectively happen is: all values would be re-fetched and re-evaluated, and the same version of the chart would be re-fetched (with a tiny but: we cache charts aggressively. It's a side topic, I assume, but there are some issues with this approach when two distinct snapshots of a chart might exist under the same version).

@osdrv added the question (Further information is requested) label on Jul 15, 2019
@parhamdoustdar
Contributor Author

parhamdoustdar commented Jul 15, 2019 via email
