Provider produced an invalid new value for .diff_output #28
@mnatan Thanks for reporting! I thought this had been fixed by 2460c9a#diff-769700fba47e9257c615acae835ad9d3, but apparently I need to take a deeper look. Regarding the possible solution you proposed - thanks anyway - but I think that's what the provider does today. In a nutshell, it uses a sha256sum computed from the output of … It was originally using …
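For illustration, the fingerprinting approach described above — hashing the diff text with sha256 — could be sketched roughly like this. This is a hypothetical sketch, not the provider's actual (Go) code; the `diff_hash` helper and the sample diff text are invented for the example:

```python
import hashlib

def diff_hash(diff_output: str) -> str:
    # Fingerprint the diff text; identical diff text always yields
    # an identical 64-character hex digest.
    return hashlib.sha256(diff_output.encode("utf-8")).hexdigest()

# Invented sample diff text, standing in for real `helmfile diff` output.
sample = "jenkins, deployment (apps) has changed:\n+  image: jenkins/jenkins:lts\n"
fingerprint = diff_hash(sample)
```

The point is that the fingerprint only stays stable if the diff text itself is byte-for-byte stable between runs.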
Just curious, but is your …
Many thanks @mumoshu for the good explanation. Because of that, we were able to locate why the cache you implemented was not working for us. We are using the Jenkins helm chart, and it randomly generates a pod name for testing. See here. As you expected, it causes the … We disabled this test for now and everything works perfectly. Before we close this issue, shall we handle this case somehow?
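To illustrate why a randomly generated pod name defeats the fingerprint: any change in the diff text, however small, produces a completely different sha256 digest. A hypothetical sketch (the manifest snippets and the random-suffix scheme are invented, merely imitating what a chart's test hook might do):

```python
import hashlib
import uuid

def diff_hash(diff_output: str) -> str:
    return hashlib.sha256(diff_output.encode("utf-8")).hexdigest()

# Two runs of the same plan, but the test hook's pod name carries a
# random suffix, so the diff text (and thus the digest) differs each run.
run1 = f"+  name: jenkins-ui-test-{uuid.uuid4().hex[:6]}\n"
run2 = f"+  name: jenkins-ui-test-{uuid.uuid4().hex[:6]}\n"
names_differ = run1 != run2
```

With near certainty `names_differ` is true, so plan and apply would compute different fingerprints for what is logically the same change.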
@mnatan Thank you so much for confirming! Glad to hear it worked.
I believe so - random pod names in hooks sound like a perfectly valid use case to me. I have two options in my mind, but not sure which one is best:
I slightly prefer the latter. But I'm not exactly sure if that's enough. Can we safely say that only K8s resources created by hooks contain random values?
How does that fix the issue? If we generate a random pod name, enhanced …
I do not think so. Any random value will cause this issue and I think people could easily use it outside hooks as well. I think we can either:
@mnatan Thanks for confirming!
No.
Oh, I see. This is the best idea in my opinion then. 👍
The progress: roboll/helmfile#1436
… Helmfile v0.126.0 or greater Ref #28
@mnatan Thanks again for reviewing the helmfile PR! I just released v0.4.0 for this. With helmfile v0.126.0 or greater installed, the provider now uses … Would you mind trying it?
Hi @mumoshu, I'll try this out this week. I assume v0.4.0 breaks with older versions of helmfile? Maybe the provider should check the version of helmfile and warn about this incompatibility? |
@mnatan Thanks! The provider does check the helmfile version to determine if it should use …
389eee7 is the final solution for this. Before this commit, the provider still emitted this error when you used helmfile's kustomize integration or values.yaml.tmpl, which use a random directory to store temporary charts and values files generated by helmfile; those random file paths showed up in diff_output, which breaks terraform.
Note that the above enhancement requires the provider v0.12.0 or greater and helmfile v0.136.0 or greater that includes roboll/helmfile#1622 |
Similar to #56 |
Hi @mumoshu, firstly thanks for the great work. We are running the 0.14.1 provider with helmfile v0.142.0 but are still seeing the same issue, i.e. …
terraform-provider-helmfile version:
v0.3.14
helmfile version:
helmfile-0.125.5.high_sierra.bottle.tar.gz
Overview
Quite often we can't create larger helm charts due to an inconsistent diff_output being generated by plan and apply. When comparing the generated diffs, it usually comes down to the order in which manifests are presented. When deploying a chart with more than 15 manifests to a fresh environment, it becomes impossible to deploy, as it fails every time.
Additionally, the issue gets worse when helmfile config contains multiple releases. The order of these releases tends to vary more often in the diff output.
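The ordering problem described above could, in principle, be neutralized by normalizing the diff before fingerprinting it. A minimal sketch under an explicit assumption — that per-manifest sections of the diff are separated by blank lines — and not the provider's actual code:

```python
def normalize_diff(diff_output: str) -> str:
    # Sort per-manifest sections (assumed to be separated by blank lines)
    # so plan and apply produce the same text regardless of the order
    # in which releases/manifests happen to appear.
    sections = [s.strip() for s in diff_output.split("\n\n") if s.strip()]
    return "\n\n".join(sorted(sections))

# Same two sections, emitted in a different order on each run.
plan_diff = "release-a changed\n\nrelease-b changed"
apply_diff = "release-b changed\n\nrelease-a changed"
same = normalize_diff(plan_diff) == normalize_diff(apply_diff)
```

After normalization, both runs hash to the same fingerprint even though the raw outputs differ.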
Example output
Already discussed here
Possible solutions
… runs the diff part twice. How about caching the first run in the module's global variable, and then later returning the same result?
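The caching idea suggested above could look roughly like this. A hypothetical sketch, not the provider's implementation (the provider is written in Go, but the memoization idea is language-agnostic; `run_diff` stands in for the expensive diff invocation):

```python
_cached_diff = None  # module-level cache, per the suggestion above

def get_diff(run_diff):
    # Run the expensive diff command once; later calls reuse the stored
    # result so plan and apply see exactly the same diff text.
    global _cached_diff
    if _cached_diff is None:
        _cached_diff = run_diff()
    return _cached_diff

# A fake diff runner that counts how many times it is actually invoked.
calls = 0
def fake_diff():
    global calls
    calls += 1
    return "some diff text"

first = get_diff(fake_diff)
second = get_diff(fake_diff)
```

Here `fake_diff` runs only once despite two `get_diff` calls; a real implementation would also need to decide when to invalidate the cache (e.g. per plan/apply cycle).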