getOriginalModifiedCurrent failed: unexpected end of JSON input #136
We are getting the same error message from time to time:
This is a pretty long Secret resource, but reading it with
@markszabo and I are on the same team. I'm looking into this issue, but I'm having a hard time reproducing it reliably; it only occurs periodically for our Secret there. @sirlori do you have a configuration that causes the issue every time? Please note,
@pst For me, deploying that service using this provider always causes the issue. I will try to come up with a more well-defined example that reproduces this.
Thanks for the clarification. What's your K8s version? Maybe it's an issue stemming from the K8s API version and the SDK version the provider uses.
@pst Thanks for looking into this 🙏
@sirlori One thing: if you remove the lastAppliedConfiguration annotation from the resource in the K8s API, the error is expected to happen every time. The provider can only handle resources that have the annotation at this time. The bug I'm trying to hunt down in our case is this error happening even though the resource has the annotation, like it did in your case originally, before you removed the annotation. What I'm trying to find is a way to reproduce how the issue happened the first time. Hope that explanation makes sense.
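To illustrate why a missing annotation produces exactly this error, here is a small sketch in Go (my own illustration with a hypothetical `readLastApplied` helper, not the provider's actual code): `encoding/json` reports "unexpected end of JSON input" when asked to unmarshal an empty string, which is what you get when the annotation has been removed.

```go
// Hypothetical sketch: parse the last-applied-configuration annotation
// the way a provider's read path plausibly would.
package main

import (
	"encoding/json"
	"fmt"
)

// readLastApplied parses the last-applied-configuration annotation as JSON.
// If the annotation is missing, the raw value is "" and json.Unmarshal
// fails with "unexpected end of JSON input".
func readLastApplied(annotations map[string]string) (map[string]interface{}, error) {
	raw := annotations["kubectl.kubernetes.io/last-applied-configuration"]
	var manifest map[string]interface{}
	if err := json.Unmarshal([]byte(raw), &manifest); err != nil {
		return nil, err
	}
	return manifest, nil
}

func main() {
	// Annotation removed (or never set): parsing fails.
	_, err := readLastApplied(map[string]string{})
	fmt.Println("error:", err) // error: unexpected end of JSON input

	// With the annotation present, parsing succeeds.
	m, _ := readLastApplied(map[string]string{
		"kubectl.kubernetes.io/last-applied-configuration": `{"kind":"Service"}`,
	})
	fmt.Println("kind:", m["kind"]) // kind: Service
}
```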
Currently, the provider relies on the lastAppliedConfig annotation. The read method reads the annotation and stores its value as the manifest in the Terraform state. This means Terraform only detects drift for Kubernetes resources managed by the Kustomization provider if there is a diff between what's in the lastAppliedConfig annotation and what's in the Terraform files. However, not all `kubectl` commands update the annotation (e.g. scale doesn't), so such drift is never corrected, even if replicas was specified in the Terraform files. Additionally, there are a number of issues (e.g. #136) that, although I have a hard time reproducing them reliably, I strongly suspect to be a result of the current implementation here too. This change stops relying on the annotation and instead uses patching logic similar to (but the reverse of) the diff and update methods to determine which attributes of the configuration in the Terraform files / from YAML differ on the API server. This is then stored in the state, and drift is now determined between the values of all attributes set in TF/YAML and what the API last returned for them.
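The "reverse patching" idea above can be sketched roughly as follows. This is my own illustration under assumptions, not the provider's actual implementation: for every attribute set in the Terraform/YAML manifest, keep the value the API server last returned, and ignore attributes the manifest never set (status, cluster-assigned fields, and so on).

```go
// Rough sketch of attribute extraction for drift detection: walk the
// desired (TF/YAML) manifest and pick out the live values of only
// those attributes the manifest actually sets.
package main

import "fmt"

func extractManaged(desired, live map[string]interface{}) map[string]interface{} {
	out := map[string]interface{}{}
	for k, dv := range desired {
		lv, ok := live[k]
		if !ok {
			out[k] = nil // set in TF/YAML but missing on the API server: drift
			continue
		}
		dm, dIsMap := dv.(map[string]interface{})
		lm, lIsMap := lv.(map[string]interface{})
		if dIsMap && lIsMap {
			out[k] = extractManaged(dm, lm) // recurse into nested objects
			continue
		}
		out[k] = lv // leaf attribute: keep what the API last returned
	}
	return out
}

func main() {
	desired := map[string]interface{}{
		"spec": map[string]interface{}{"replicas": 1},
	}
	live := map[string]interface{}{
		"spec":   map[string]interface{}{"replicas": 3, "clusterIP": "10.0.0.1"},
		"status": map[string]interface{}{"readyReplicas": 3},
	}
	got := extractManaged(desired, live)
	// replicas drifted (e.g. via kubectl scale, which does not touch the
	// annotation), while status and unset attributes are ignored.
	fmt.Println(got["spec"].(map[string]interface{})["replicas"]) // 3
	_, hasStatus := got["status"]
	fmt.Println(hasStatus) // false
}
```

Comparing this extracted view against the values in the Terraform files would then surface the replicas drift that the annotation-based read could never see.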
Closing this issue. We haven't seen it again since Oct last year, and there is no reproduction.
I don't know what exactly is causing this, but with this Service:
corresponding to the following kustomize yaml:
I can't successfully run a terraform plan because of this error:
So then I modified this part of the code like this, rebuilt the provider, and used `terraform init -plugin-dir=` to run it. In the terraform apply output I can see the following:
This made me think that I was passing an empty string to the manifest resource. That is not the case, since I put the manifest content inside a `resource "local_file"` and the content is there.

That said, after all that, I (by instinct, as it was the only JSON-encoded field I could see) tried to delete the `kubectl.kubernetes.io/last-applied-configuration` annotation from the resource, and the plan magically started working. All the resources I have inside that namespace have the same problem, and after removing the `kubectl.kubernetes.io/last-applied-configuration` annotation they all started working again. Please let me know if I can be of any additional help 🙏