This repository has been archived by the owner on Nov 1, 2022. It is now read-only.

fluxctl release --watch for HelmReleases #3019

Closed
langesven opened this issue Apr 23, 2020 · 1 comment
Assignees
Labels
enhancement flux2 Resolution suggested - already fixed in Flux v2

Comments

@langesven

PR #1525 implemented the helpful --watch flag for fluxctl release, making it possible to monitor the rollout status of e.g. a deployment.
This lets a user run fluxctl release --watch as a blocking call to release a new image for a workload and see whether that rollout succeeds or not. We get instant feedback from a single blocking call, which is nice because we don't need to poll anything afterwards to see whether the rollout worked.

As people already mentioned in that PR, and as I have noticed as well, this behaviour sadly does not hold for HelmRelease workloads.
For testing I'm using the podinfo HelmRelease workload from helm-operator-get-started; the chart and podinfo.yaml are part of the git repository that flux-operator monitors.
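For context, an automated HelmRelease of this kind looks roughly like the following. This is a sketch, not the exact podinfo.yaml from helm-operator-get-started: the repository URL, chart path, and annotation values are illustrative.

```yaml
# Illustrative Helm Operator v1 HelmRelease with Flux image automation.
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: podinfo
  namespace: services
  annotations:
    fluxcd.io/automated: "true"            # let Flux update the image tag
    fluxcd.io/tag.chart-image: glob:dev-*  # only follow dev-* tags
spec:
  releaseName: podinfo
  chart:
    git: git@github.com:example/helm-operator-get-started  # illustrative URL
    path: charts/podinfo
    ref: master
  values:
    image:
      repository: stefanprodan/podinfo
      tag: dev-hdtwcel9
```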

# k get hr podinfo
NAME      RELEASE   STATUS     MESSAGE                       AGE
podinfo   podinfo   deployed   Helm release sync succeeded   21s
# fluxctl list-workloads | grep podinfo
services:deployment/podinfo                                        podinfo                                             stefanprodan/podinfo:dev-hdtwcel9                                                                   ready
services:helmrelease/podinfo                                       chart-image                                         stefanprodan/podinfo:dev-hdtwcel9                                                                   deployed  automated
# fluxctl release -n services --workload services:helmrelease/podinfo -v -w -i stefanprodan/podinfo:dev-kb9lm91e
Submitting release ...
WORKLOAD                      STATUS   UPDATES
services:helmrelease/podinfo  success  chart-image: stefanprodan/podinfo:dev-hdtwcel9 -> dev-kb9lm91e
Commit pushed:	16e8b99
Commit applied:	16e8b99
Monitoring rollout ...
WORKLOAD                      CONTAINER    IMAGE                              RELEASE   REPLICAS
services:helmrelease/podinfo  chart-image  stefanprodan/podinfo:dev-kb9lm91e  deployed  0/0 (0 outdated, 0 ready)

WORKLOAD                      CONTAINER    IMAGE                              RELEASE   REPLICAS
services:helmrelease/podinfo  chart-image  stefanprodan/podinfo:dev-kb9lm91e  deployed  0/0 (0 outdated, 0 ready)
# image tag switches here because automation is enabled. without automation the outcome is the same, just the tag doesn't change
WORKLOAD                      CONTAINER    IMAGE                              RELEASE   REPLICAS
services:helmrelease/podinfo  chart-image  stefanprodan/podinfo:dev-hdtwcel9  deployed  0/0 (0 outdated, 0 ready)

WORKLOAD                      CONTAINER    IMAGE                              RELEASE   REPLICAS
services:helmrelease/podinfo  chart-image  stefanprodan/podinfo:dev-hdtwcel9  deployed  0/0 (0 outdated, 0 ready)

WORKLOAD                      CONTAINER    IMAGE                              RELEASE   REPLICAS
services:helmrelease/podinfo  chart-image  stefanprodan/podinfo:dev-hdtwcel9  deployed  0/0 (0 outdated, 0 ready)
# this will repeat until ctrl+c

As such, --watch on a HelmRelease currently just ends up in an endless loop: it neither exits nor provides any real feedback about the rollout itself.

It would be nice if it could mimic the behaviour of helm upgrade --wait and block fluxctl release --watch until the rollout has either succeeded or failed (both of which flux/helm-operator already monitor, since there is an option for automated rollbacks and such).
If that were the case, we could more easily make it visible to our users whether or not their rollouts actually worked.
Currently they publish a new image version and the pipeline immediately turns green once the fluxctl release command has been issued, so the user thinks everything is OK. To be sure the actual rollout worked, they would then need to check the status of the HelmRelease (deployed vs. failed); if there was an automated rollback, they additionally need to check helm history to see information about that, and so on.
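In the meantime, a pipeline can approximate the desired blocking behaviour by polling the HelmRelease status itself after fluxctl release returns. The following is a minimal sketch of that polling loop; the status fetcher is passed in as a callable so the loop is testable, and in practice it would shell out to something like kubectl get hr podinfo -o jsonpath=... (the exact status field name and the status values "deployed"/"failed" are assumptions based on the Helm Operator v1 output shown above).

```python
import time


def wait_for_helmrelease(fetch_status, timeout=300, interval=5):
    """Poll a HelmRelease until its status settles.

    fetch_status: callable returning the current release status string,
    e.g. "pending", "deployed" or "failed" (values assumed from Helm
    Operator v1; in practice this would shell out to kubectl).
    Returns True on "deployed", False on "failed"; raises TimeoutError
    if the release does not settle within `timeout` seconds.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status()
        if status == "deployed":
            return True
        if status == "failed":
            return False
        time.sleep(interval)
    raise TimeoutError("HelmRelease did not settle in time")
```

A pipeline would call this right after fluxctl release and fail the job on False or timeout, giving users the immediate red/green signal the issue asks for.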

What happens during a release is very opaque. We would like to make it more visible, to give our users more confidence in the system, so they don't have to debug why something they just released isn't working the way they expected, only to discover there was a deployment problem and their version isn't actually running.

@langesven langesven added blocked-needs-validation Issue is waiting to be validated before we can proceed enhancement labels Apr 23, 2020
@kingdonb kingdonb self-assigned this Feb 19, 2021
@kingdonb kingdonb added flux2 Resolution suggested - already fixed in Flux v2 and removed blocked-needs-validation Issue is waiting to be validated before we can proceed labels Feb 19, 2021
@kingdonb
Member

This is great feedback, which the Flux developer team has taken into account when making the next version of Flux, v2, where the dependsOn feature was added. See Health Assessment and Kustomization Dependencies.

There is a relevant discussion regarding the slight complication that Kustomization and HelmRelease are two different resources, and there is no facility for cross-resource dependsOn. As you correctly noted, even the v1 Helm Operator supported waiting for a Helm release to become ready. In Helm Controller, this is now exposed as kstatus health status, and the Kustomization that deploys a HelmRelease can also run Health Assessment against one or more target HelmReleases, so the Kustomization will not become ready until the HelmReleases are also ready.
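To illustrate the Flux v2 approach described above: a Kustomization can list a HelmRelease under healthChecks, and it will not report Ready until that HelmRelease passes health assessment. A sketch, with illustrative names, namespaces, and paths (API versions are those current around the time of this comment):

```yaml
# Flux v2 Kustomization that waits for a HelmRelease to become healthy.
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: podinfo
  namespace: flux-system
spec:
  interval: 5m
  path: ./releases/podinfo      # illustrative path in the Git source
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
  timeout: 2m                   # bound how long the health check may take
  healthChecks:
    - apiVersion: helm.toolkit.fluxcd.io/v2beta1
      kind: HelmRelease
      name: podinfo
      namespace: services
```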

This is a great feature request, but as development effort has focused on Flux v2 and v1 has gone into maintenance mode, the window for adding new features to Flux v1 has passed. Apologies for the length of time that has elapsed since your inquiry. If you've been following our development efforts, then of course we hope you are able to upgrade; here's more info on how to find support with that: https://fluxcd.io/support/

Closing. Thanks for using and contributing to Flux!
