getOriginalModifiedCurrent failed: unexpected end of JSON input #136

Closed · sirlori opened this issue Oct 6, 2021 · 7 comments

sirlori commented Oct 6, 2021

I don't know what exactly is causing this, but with this Service:

kubectl get service appsmith-editor -o json -n appsmith
{
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {
        "annotations": {
            "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Service\",\"metadata\":{\"name\":\"appsmith-editor\",\"namespace\":\"appsmith\"},\"spec\":{\"ports\":[{\"port\":80,\"targetPort\":80}],\"selector\":{\"app\":\"appsmith-editor\"}}}\n"
        },
        "creationTimestamp": "2021-08-16T18:08:24Z",
        "managedFields": [
            {
                "apiVersion": "v1",
                "fieldsType": "FieldsV1",
                "fieldsV1": {
                    "f:metadata": {
                        "f:annotations": {
                            ".": {},
                            "f:kubectl.kubernetes.io/last-applied-configuration": {}
                        }
                    },
                    "f:spec": {
                        "f:ports": {
                            ".": {},
                            "k:{\"port\":80,\"protocol\":\"TCP\"}": {
                                ".": {},
                                "f:port": {},
                                "f:protocol": {},
                                "f:targetPort": {}
                            }
                        },
                        "f:selector": {
                            ".": {},
                            "f:app": {}
                        },
                        "f:sessionAffinity": {},
                        "f:type": {}
                    }
                },
                "manager": "terraform-provider-kustomization_v0.5.0",
                "operation": "Update",
                "time": "2021-08-16T18:08:24Z"
            }
        ],
        "name": "appsmith-editor",
        "namespace": "appsmith",
        "resourceVersion": "88786750",
        "selfLink": "/api/v1/namespaces/appsmith/services/appsmith-editor",
        "uid": "6aa54af8-f63f-42df-a5de-5899f609ede8"
    },
    "spec": {
        "clusterIP": "172.20.13.41",
        "ports": [
            {
                "port": 80,
                "protocol": "TCP",
                "targetPort": 80
            }
        ],
        "selector": {
            "app": "appsmith-editor"
        },
        "sessionAffinity": "None",
        "type": "ClusterIP"
    },
    "status": {
        "loadBalancer": {}
    }
}

corresponding to the following Kustomize YAML:

apiVersion: v1
kind: Service
metadata:
  name: appsmith-editor
spec:
  selector:
    app: appsmith-editor
  ports:
  - port: 80
    targetPort: 80

I can't successfully run a terraform plan because of this error:

│ Error: github.com/kbst/terraform-provider-kustomize/kustomize.kustomizationResourceDiff: apiVersion: "v1", kind: "Service", namespace: "appsmith" name: "appsmith-editor": getOriginalModifiedCurrent failed: unexpected end of JSON input
│ 
│   with module.webservices_cluster_k8s_base_deploy_appsmith.kustomization_resource.appsmith["~G_v1_Service|appsmith|appsmith-editor"],
│   on ../common/base_k8s_deployments/appsmith/appsmith.tf line 40, in resource "kustomization_resource" "appsmith":
│   40: resource "kustomization_resource" "appsmith" {

So I modified this part of the code as follows, rebuilt the provider, and used terraform init -plugin-dir= to run it:

client := m.(*Config).Client
cgvk := m.(*Config).CachedGroupVersionKind

n, err := parseJSON(modifiedJSON)
if err != nil {
	// added for debugging: log the modified JSON that failed to parse
	log.Printf("JSONERRORmod: %v %v", modifiedJSON, err)
	return nil, nil, nil, err
}
o, err := parseJSON(originalJSON)
if err != nil {
	// added for debugging: log the original JSON that failed to parse
	log.Printf("JSONERRORor: %v %v", originalJSON, err)
	return nil, nil, nil, err
}

setLastAppliedConfig(o, originalJSON)
setLastAppliedConfig(n, modifiedJSON)

In the terraform apply output I can see the following:

  26097,108: 2021-10-06T18:22:53.966+0200 [DEBUG] provider.terraform-provider-kustomization_v0.5.0: 2021/10/06 18:22:53 JSONERRORmod:  unexpected end of JSON input
  26104,108: 2021-10-06T18:22:54.138+0200 [DEBUG] provider.terraform-provider-kustomization_v0.5.0: 2021/10/06 18:22:54 JSONERRORmod:  unexpected end of JSON input
  26124,108: 2021-10-06T18:22:54.661+0200 [DEBUG] provider.terraform-provider-kustomization_v0.5.0: 2021/10/06 18:22:54 JSONERRORmod:  unexpected end of JSON input
  26131,108: 2021-10-06T18:22:54.756+0200 [DEBUG] provider.terraform-provider-kustomization_v0.5.0: 2021/10/06 18:22:54 JSONERRORmod:  unexpected end of JSON input
  26142,108: 2021-10-06T18:22:55.174+0200 [DEBUG] provider.terraform-provider-kustomization_v0.5.0: 2021/10/06 18:22:55 JSONERRORmod:  unexpected end of JSON input
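
As a sanity check (a minimal sketch, not provider code): Go's encoding/json produces exactly this message when asked to parse an empty input, which matches the empty value printed after JSONERRORmod above.

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	var v map[string]interface{}
	// Parsing an empty string reproduces the message from the logs above.
	err := json.Unmarshal([]byte(""), &v)
	fmt.Println(err) // unexpected end of JSON input
}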

This made me think that I was passing an empty string to the manifest argument of the resource. That is not the case: I put the same manifest content into a local_file resource and the content is there:

{"apiVersion":"v1","kind":"Service","metadata":{"name":"appsmith-editor","namespace":"appsmith"},"spec":{"ports":[{"port":80,"targetPort":80}],"selector":{"app":"appsmith-editor"}}}

That said, after all of the above I tried deleting the kubectl.kubernetes.io/last-applied-configuration annotation from the resource (on instinct, as it was the only JSON-encoded field I could see) and the plan started working.
All the resources I have inside that namespace have the same problem, and after removing the kubectl.kubernetes.io/last-applied-configuration annotation they all started working again.

Please let me know if I can be of any additional help 🙏

markszabo commented:

We are getting the same error message from time to time:

kustomizationResourceDiff: apiVersion: "v1", kind: "Secret", namespace: "istio-system" name: "cacerts": getOriginalModifiedCurrent failed: unexpected end of JSON input

This is a pretty long Secret resource, but reading it with kubectl works. Also, sometimes simply rerunning the pipeline makes the error go away.

pst (Member) commented Oct 8, 2021

@markszabo and I are in the same team. I'm looking into this issue, but I'm having a hard time reproducing it reliably. It only occurs periodically for our secret there.

@sirlori do you have a configuration that causes the issue every time? Please note that local_file resources may only be created during apply, so depending on a file created by local_file in the kustomization is more likely to cause trouble.

sirlori (Author) commented Oct 8, 2021

@pst For me, deploying that Service using this provider always causes the issue. I will try to come up with a more well-defined configuration that reproduces this.
As for the local_file, I meant that I swapped the kustomization_resource for a local_file to check whether the content of the manifest field was actually empty; because of that, the local_file was not a dependency of any other resource.

pst (Member) commented Oct 8, 2021

Thanks for the clarification. What's your K8s version? Maybe it's an issue stemming from the K8s API version and the SDK version the provider uses.

sirlori (Author) commented Oct 8, 2021

@pst
Server Version: version.Info{Major:"1", Minor:"19+", GitVersion:"v1.19.13-eks-8df270", GitCommit:"8df2700a72a2598fa3a67c05126fa158fd839620", GitTreeState:"clean", BuildDate:"2021-07-31T01:36:57Z", GoVersion:"go1.15.14", Compiler:"gc", Platform:"linux/amd64"}

Thanks for looking into this 🙏

pst (Member) commented Oct 8, 2021

@sirlori one thing: if you removed the lastAppliedConfiguration annotation from the resource in the K8s API, the error is expected to happen every time. The provider can only handle resources that have the annotation at this time.

The bug I'm trying to hunt down in our case is this error happening even though the resource has the annotation, like it did in your case originally, before you removed the annotation.

What I'm trying to find is a way to reproduce how the issue happened the first time. Hope that explanation makes sense.
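
As a rough illustration of that failure mode (a hypothetical sketch, not the provider's actual code): if the original JSON is taken from the lastAppliedConfig annotation and the annotation is missing or empty, the parse step fails with exactly this error.

package main

import (
	"encoding/json"
	"fmt"
)

// getLastAppliedConfig is a hypothetical helper mimicking how the original
// JSON could be read from the
// kubectl.kubernetes.io/last-applied-configuration annotation.
func getLastAppliedConfig(annotations map[string]string) string {
	return annotations["kubectl.kubernetes.io/last-applied-configuration"]
}

func main() {
	// A resource without the annotation.
	annotations := map[string]string{}

	originalJSON := getLastAppliedConfig(annotations) // ""

	var original map[string]interface{}
	if err := json.Unmarshal([]byte(originalJSON), &original); err != nil {
		// Prints: getOriginalModifiedCurrent failed: unexpected end of JSON input
		fmt.Println("getOriginalModifiedCurrent failed:", err)
	}
}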

pst added a commit that referenced this issue Nov 6, 2021
Currently, the provider relies on the lastAppliedConfig annotation. The read method reads the annotation and stores its value as the manifest in the Terraform state. This means Terraform only detects drift for Kubernetes resources managed by the Kustomization provider if there is a diff between what's in the lastAppliedConfig annotation and what's in the Terraform files.

Not all `kubectl` commands update the annotation, however; e.g. scale doesn't, so such drift is never corrected, even if replicas was specified in the Terraform files.

Additionally, there are a number of issues (e.g. #136) that, although I have a hard time reproducing them reliably, I strongly suspect to be a result of the current implementation here too.

This change stops relying on the annotation and instead uses similar but reverse patching logic to the diff and update methods to determine which attributes of the configuration in the Terraform files / from YAML are different on the API server. This is then stored in the state. Drift is now determined between the values of all attributes set in TF/YAML and what the API last returned for them.
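
A rough sketch of that idea (an illustration only, with an assumed helper, not the provider's actual patching code): project the attributes set in the Terraform/YAML configuration onto what the API server currently returns, and store that projection in state, so defaulted fields the configuration never set don't show up as drift.

package main

import (
	"encoding/json"
	"fmt"
)

// project is a hypothetical helper: for every attribute set in declared, it
// returns the value the live object currently has for it. Nested objects are
// handled recursively; scalars and lists are taken from live as-is.
func project(declared, live map[string]interface{}) map[string]interface{} {
	out := map[string]interface{}{}
	for k, dv := range declared {
		lv, ok := live[k]
		if !ok {
			out[k] = nil // attribute declared but missing on the API server
			continue
		}
		dm, dok := dv.(map[string]interface{})
		lm, lok := lv.(map[string]interface{})
		if dok && lok {
			out[k] = project(dm, lm)
			continue
		}
		out[k] = lv
	}
	return out
}

func main() {
	var declared, live map[string]interface{}

	// What the Terraform files / YAML declare.
	if err := json.Unmarshal([]byte(`{"spec":{"selector":{"app":"appsmith-editor"},"ports":[{"port":80,"targetPort":80}]}}`), &declared); err != nil {
		panic(err)
	}
	// What the API server last returned, including defaulted fields.
	if err := json.Unmarshal([]byte(`{"spec":{"selector":{"app":"appsmith-editor"},"ports":[{"port":80,"protocol":"TCP","targetPort":80}],"sessionAffinity":"None","type":"ClusterIP","clusterIP":"172.20.13.41"}}`), &live); err != nil {
		panic(err)
	}

	// Only attributes set in the configuration end up in state; defaulted
	// fields like spec.type or spec.clusterIP are ignored for drift detection.
	state, _ := json.Marshal(project(declared, live))
	fmt.Println(string(state))
	// {"spec":{"ports":[{"port":80,"protocol":"TCP","targetPort":80}],"selector":{"app":"appsmith-editor"}}}
}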
pst added a commit that referenced this issue Nov 7, 2021

pst added a commit that referenced this issue Nov 7, 2021
pst (Member) commented Jun 10, 2022

Closing this issue. We haven't seen it again since October last year, and there is no reproduction.

pst closed this as completed Jun 10, 2022