
Do not remove server side fields for datasources #1802

Merged: 3 commits into hashicorp:main, Nov 3, 2022

Conversation

St0rmingBr4in
Contributor

Server side fields for datasources should not be deleted.

This allows reading the status field for the generic kubernetes_resource
datasource, and thus fixes #1699.

@hashicorp-cla

hashicorp-cla commented Aug 10, 2022

CLA assistant check
All committers have signed the CLA.

@alexsomesan
Member

alexsomesan commented Aug 16, 2022

Hi and thanks for contributing this. The change looks good; I think this is a useful improvement.

However, did you actually run the test on your side?
AFAIK there is no status attribute on ConfigMap objects. There is no way the test can use ConfigMaps to validate this change.

My suggestion would be to base the test for this on a Deployment or something that exposes a status.

As soon as we have a working test, I think this is good to go.

Thanks!

@St0rmingBr4in
Contributor Author

Yes, indeed. I'm making the change to test this properly.

@St0rmingBr4in
Contributor Author

St0rmingBr4in commented Aug 22, 2022

I'm not able to run the tests in the ./manifest/test/acceptance directory; maybe I'm doing something wrong.
I am able to run the acceptance tests in the ./kubernetes directory, and they pass, so my kube setup is working.
Here is the issue I'm facing:

TF_ACC=1 go test ./manifest/test/acceptance                                                   
package github.com/hashicorp/terraform-provider-kubernetes/manifest/test/acceptance: build constraints exclude all Go files in /Users/julien.doche/Documents/git-repos/terraform-provider-kubernetes/manifest/test/acceptance

I did not find documentation on how to run the tests in the ./manifest/test/acceptance directory.
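For what it's worth, a "build constraints exclude all Go files" error usually means the test files carry a build tag that has to be passed explicitly. A sketch of what such a guarded file looks like, assuming the tag and package are both named acceptance (both names are assumptions):

```go
//go:build acceptance
// +build acceptance

// Files guarded like this are invisible to a plain `go test`; the tag has
// to be supplied explicitly, e.g.:
//
//	TF_ACC=1 go test -tags acceptance ./manifest/test/acceptance

package acceptance
```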

As a workaround, I tried to create a new test in the ./kubernetes directory, but it fails with:

    data_source_kubernetes_resource_test.go:16: Step 2/2 error: Error running pre-apply refresh: exit status 1
        
        Error: Invalid data source
        
          on terraform_plugin_test.tf line 10, in data "kubernetes_resource" "test":
          10: data "kubernetes_resource" "test" {
        
        The provider hashicorp/kubernetes does not support data source
        "kubernetes_resource".

I'm not sure why; I believe the kubernetes_resource data source is enabled by default, so it should work.
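One possible explanation, offered as a sketch rather than a verified diagnosis: the kubernetes_resource data source is implemented by the manifest half of the provider, and the shipped binary serves both halves through a protocol-v5 mux, so a test harness that instantiates only the SDKv2 provider would not know the data source exists. Roughly (all names below are placeholders):

```go
package sketch

import (
	"context"

	"github.com/hashicorp/terraform-plugin-go/tfprotov5"
	"github.com/hashicorp/terraform-plugin-mux/tf5muxserver"
)

// Placeholders standing in for the two real provider halves.
var (
	sdkProviderServer      func() tfprotov5.ProviderServer // SDKv2 half: classic resources and data sources
	manifestProviderServer func() tfprotov5.ProviderServer // manifest half: kubernetes_manifest, kubernetes_resource
)

// newMuxedServer combines both halves; only then is a name like
// kubernetes_resource resolvable by Terraform.
func newMuxedServer(ctx context.Context) (tfprotov5.ProviderServer, error) {
	return tf5muxserver.NewMuxServer(ctx, sdkProviderServer, manifestProviderServer)
}
```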

@balpert89

@alexsomesan can you look into this one? Issue #1699 blocks a lot of users from actually using kubernetes_resource as a reliable source for inspecting a status (contrary to what is stated in #1548).

@balpert89

balpert89 commented Sep 15, 2022

I have tried to verify the patch, hoping it might solve a problem I face. @alexsomesan, funnily enough, the proposed change does not return the status of an object that definitely has one.

I have cloned the source code, applied @St0rmingBr4in's code changes, built the binaries, and used them locally.
The result: I can see dynamic fields (e.g. metadata.managedFields), but status is not in the returned object.

I have used the following:

main.tf

terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.13.1"
    }

  }
}

provider "kubernetes" {
  config_path = "resources/kube-config"
}

data "kubernetes_resource" "test-deployment" {
  api_version = "apps/v1"
  kind        = "Deployment"

  metadata {
    name      = "cert-manager"
    namespace = "cert-manager"
  }
}
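One thing worth double-checking when running a locally built provider: with a registry version constraint like ~> 2.13.1 above, terraform init may install the published release rather than the patched binary. A dev_overrides block in the CLI configuration forces the local build (the override path below is hypothetical):

```hcl
# ~/.terraformrc (the override path is hypothetical)
provider_installation {
  dev_overrides {
    "hashicorp/kubernetes" = "/home/user/go/bin"
  }

  # Use normal installation behavior for all other providers.
  direct {}
}
```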

outputs.tf

output "test" {
  value = data.kubernetes_resource.test-deployment.object.metadata.managedFields
}

Output:

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:

test = [
  {
    "apiVersion" = "apps/v1"
    "fieldsType" = "FieldsV1"
    "fieldsV1" = {
      "f:metadata" = {
        "f:labels" = {
          "f:app" = {}
          "f:app.kubernetes.io/component" = {}
          "f:app.kubernetes.io/instance" = {}
          "f:app.kubernetes.io/name" = {}
          "f:app.kubernetes.io/version" = {}
          "f:kustomize.toolkit.fluxcd.io/name" = {}
          "f:kustomize.toolkit.fluxcd.io/namespace" = {}
        }
      }
      "f:spec" = {
        "f:replicas" = {}
        "f:selector" = {}
        "f:strategy" = {}
        "f:template" = {
          "f:metadata" = {
            "f:annotations" = {
              "f:prometheus.io/path" = {}
              "f:prometheus.io/port" = {}
              "f:prometheus.io/scrape" = {}
            }
            "f:creationTimestamp" = {}
            "f:labels" = {
              "f:app" = {}
              "f:app.kubernetes.io/component" = {}
              "f:app.kubernetes.io/instance" = {}
              "f:app.kubernetes.io/name" = {}
              "f:app.kubernetes.io/version" = {}
            }
          }
          "f:spec" = {
            "f:containers" = {
              "k:{\"name\":\"cert-manager\"}" = {
                "." = {}
                "f:args" = {}
                "f:env" = {
                  "k:{\"name\":\"POD_NAMESPACE\"}" = {
                    "." = {}
                    "f:name" = {}
                    "f:valueFrom" = {
                      "f:fieldRef" = {
                        "f:fieldPath" = {}
                      }
                    }
                  }
                }
                "f:image" = {}
                "f:imagePullPolicy" = {}
                "f:name" = {}
                "f:ports" = {
                  "k:{\"containerPort\":9402,\"protocol\":\"TCP\"}" = {
                    "." = {}
                    "f:containerPort" = {}
                    "f:name" = {}
                    "f:protocol" = {}
                  }
                }
                "f:resources" = {}
                "f:securityContext" = {
                  "f:allowPrivilegeEscalation" = {}
                }
              }
            }
            "f:nodeSelector" = {
              "f:kubernetes.io/os" = {}
            }
            "f:securityContext" = {
              "f:runAsNonRoot" = {}
            }
            "f:serviceAccountName" = {}
          }
        }
      }
    }
    "manager" = "kustomize-controller"
    "operation" = "Apply"
    "time" = "2022-08-31T14:24:40Z"
  },
  {
    "apiVersion" = "apps/v1"
    "fieldsType" = "FieldsV1"
    "fieldsV1" = {
      "f:metadata" = {
        "f:annotations" = {
          "." = {}
          "f:deployment.kubernetes.io/revision" = {}
        }
      }
      "f:status" = {
        "f:availableReplicas" = {}
        "f:conditions" = {
          "." = {}
          "k:{\"type\":\"Available\"}" = {
            "." = {}
            "f:lastTransitionTime" = {}
            "f:lastUpdateTime" = {}
            "f:message" = {}
            "f:reason" = {}
            "f:status" = {}
            "f:type" = {}
          }
          "k:{\"type\":\"Progressing\"}" = {
            "." = {}
            "f:lastTransitionTime" = {}
            "f:lastUpdateTime" = {}
            "f:message" = {}
            "f:reason" = {}
            "f:status" = {}
            "f:type" = {}
          }
        }
        "f:observedGeneration" = {}
        "f:readyReplicas" = {}
        "f:replicas" = {}
        "f:updatedReplicas" = {}
      }
    }
    "manager" = "kube-controller-manager"
    "operation" = "Update"
    "time" = "2022-08-31T14:24:46Z"
  },
]

When I change outputs.tf to:

output "test" {
  value = data.kubernetes_resource.test-deployment.object.status
}

An error is returned:

╷
│ Error: Unsupported attribute
│ 
│   on outputs.tf line 2, in output "test":
│    2:   value = data.kubernetes_resource.test-deployment.object.status
│     ├────────────────
│     │ data.kubernetes_resource.test-deployment.object is object with 4 attributes
│ 
│ This object does not have an attribute named "status".

To debug further, I added a check that appends an error when res.Object["status"] is not nil:

diff --git a/manifest/provider/datasource.go b/manifest/provider/datasource.go
--- a/manifest/provider/datasource.go	(revision 83c23ce21474fa594ddf313f18210862197ba688)
+++ b/manifest/provider/datasource.go	(date 1663274491439)
@@ -148,8 +148,16 @@
 		return resp, nil
 	}
 
-	fo := RemoveServerSideFields(res.Object)
-	nobj, err := payload.ToTFValue(fo, objectType, th, tftypes.NewAttributePath())
+	if res.Object["status"] != nil {
+		resp.Diagnostics = append(resp.Diagnostics, &tfprotov5.Diagnostic{
+			Severity: tfprotov5.DiagnosticSeverityError,
+			Summary:  "DEBUG !!! The 'status' field on 'res.Object' is not empty !!! DEBUG",
+			Detail:   "Debug Error",
+		})
+		return resp, nil
+	}
+
+	nobj, err := payload.ToTFValue(res.Object, objectType, th, tftypes.NewAttributePath())
 	if err != nil {
 		resp.Diagnostics = append(resp.Diagnostics, &tfprotov5.Diagnostic{
 			Severity: tfprotov5.DiagnosticSeverityError,

And, surprise, I can see the error when running terraform apply:

╷
│ Error: DEBUG !!! The 'status' field on 'res.Object' is not empty !!! DEBUG
│ 
│   with data.kubernetes_resource.test-deployment,
│   on main.tf line 15, in data "kubernetes_resource" "test-deployment":
│   15: data "kubernetes_resource" "test-deployment" {
│ 
│ Debug Error
╵

Edit: I have also omitted the delete(in, "status") call in func RemoveServerSideFields just to be sure, but to no avail. The status is still not returned.
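For context, here is a minimal sketch of what RemoveServerSideFields plausibly does, reconstructed from this thread; the exact set of deleted fields is an assumption:

```go
// Sketch reconstructed from the discussion: strip fields the API server
// populates so they never reach the Terraform value. The exact field set
// removed here is an assumption.
func RemoveServerSideFields(in map[string]interface{}) map[string]interface{} {
	// The call experimentally omitted above.
	delete(in, "status")

	// Server-populated metadata (e.g. managedFields) is dropped as well.
	if meta, ok := in["metadata"].(map[string]interface{}); ok {
		delete(meta, "managedFields")
		delete(meta, "resourceVersion")
		delete(meta, "uid")
	}
	return in
}
```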

@balpert89

@St0rmingBr4in @alexsomesan after a lengthy debugging session I have found the problem, or rather the solution. The patch that fixes it (including changes by @St0rmingBr4in):

diff --git a/manifest/provider/datasource.go b/manifest/provider/datasource.go
--- a/manifest/provider/datasource.go	(revision 83c23ce21474fa594ddf313f18210862197ba688)
+++ b/manifest/provider/datasource.go	(date 1663322404109)
@@ -105,7 +105,7 @@
 	}
 	rcl := client.Resource(gvr)
 
-	objectType, th, err := s.TFTypeFromOpenAPI(ctx, gvk, false)
+	objectType, th, err := s.TFTypeFromOpenAPI(ctx, gvk, true)
 	if err != nil {
 		resp.Diagnostics = append(resp.Diagnostics, &tfprotov5.Diagnostic{
 			Severity: tfprotov5.DiagnosticSeverityError,
@@ -148,8 +148,7 @@
 		return resp, nil
 	}
 
-	fo := RemoveServerSideFields(res.Object)
-	nobj, err := payload.ToTFValue(fo, objectType, th, tftypes.NewAttributePath())
+	nobj, err := payload.ToTFValue(res.Object, objectType, th, tftypes.NewAttributePath())
 	if err != nil {
 		resp.Diagnostics = append(resp.Diagnostics, &tfprotov5.Diagnostic{
 			Severity: tfprotov5.DiagnosticSeverityError,

As you can see, the call s.TFTypeFromOpenAPI(ctx, gvk, true) instructs the provider to fetch the status schema definition as well (via the true argument). With this, I am able to pull status information from an object.
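With that in place, status fields become addressable directly from the data source. For example, building on the configuration above (readyReplicas is one of the status fields visible in the managedFields dump earlier):

```hcl
output "ready_replicas" {
  value = data.kubernetes_resource.test-deployment.object.status.readyReplicas
}
```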

@alexsomesan how can we proceed with the PR?

@St0rmingBr4in
Contributor Author

Great, I'll try applying the changes and test this on my side. If I'm not mistaken, my patch worked without having to change the arguments to TFTypeFromOpenAPI; I'll dig a bit into that. As for the test, I'll give it another shot.

@St0rmingBr4in
Contributor Author

Included @balpert89's changes and added a test using a Kubernetes Deployment as test data.

@alexsomesan
Member

Sorry, this fell off my radar. I'll have a look at the updates in the next couple of days.

@St0rmingBr4in
Contributor Author

@alexsomesan well, if you could approve the workflows, that would be great! ^^

@St0rmingBr4in
Contributor Author

@alexsomesan Hey, have you found a bit of time to take a look at this? ☺️

@St0rmingBr4in
Contributor Author

@BBBmau Could you please approve the running workflows so that the CI runs on this change? Thanks.

It does not make sense to delete server side fields for datasources; the user
should be responsible for not creating loops which would cause a perpetual
diff.

This allows reading the status field for the generic `kubernetes_resource`
datasource. This PR therefore fixes hashicorp#1699.

Signed-off-by: Julien DOCHE <julien.doche@datadoghq.com>
@St0rmingBr4in
Contributor Author

Nice, all green! Only the review is missing now 😊

@alexsomesan
Member

alexsomesan left a comment

LGTM. Thanks a lot for this and sorry for the dragged out review.

@alexsomesan merged commit e2444cd into hashicorp:main on Nov 3, 2022
@github-actions

github-actions bot commented Dec 4, 2022

I'm going to lock this pull request because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems related to this change, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

The github-actions bot locked this PR as resolved and limited conversation to collaborators on Dec 4, 2022.
Successfully merging this pull request may close these issues.

Return status for kubernetes_resource