
generic ephemeral volume with empty metadata causes panic #2198

Closed
ryanbarry opened this issue Jul 20, 2023 · 4 comments · Fixed by #2199

@ryanbarry

Terraform Version, Provider Version and Kubernetes Version

Terraform version: v1.2.1
Kubernetes provider version: built from local clone at 5c076203ed31a84dbb943c0d448e1d28d8e285c9
Kubernetes version: v1.23.17

Affected Resource(s)

  • kubernetes_deployment

Specifically, the generic ephemeral volume support added to this resource in #2183.

Terraform Configuration Files

resource "kubernetes_deployment" "deployment" {
    metadata {
        labels           = {
            "app" = "compass"
        }
        name             = "compass"
        namespace        = "default"
    }

    spec {
        min_ready_seconds         = 0
        paused                    = false
        progress_deadline_seconds = 600
        replicas                  = "1"
        revision_history_limit    = 10

        selector {
            match_labels = {
                "app" = "compass"
            }
        }

        strategy {
            type = "RollingUpdate"

            rolling_update {
                max_surge       = "25%"
                max_unavailable = "25%"
            }
        }

        template {
            metadata {
                labels     = {
                    "app" = "compass"
                }
            }

        spec {
                active_deadline_seconds          = 0
                automount_service_account_token  = false
                dns_policy                       = "ClusterFirst"
                enable_service_links             = true
                host_ipc                         = false
                host_network                     = false
                host_pid                         = false
                restart_policy                   = "Always"
                scheduler_name                   = "default-scheduler"
                service_account_name             = "default"
                share_process_namespace          = false
                termination_grace_period_seconds = 120

                container {
                    args                       = [
                        "--address=0.0.0.0:8443",
                    ]
                    command                    = []
                    image                      = ###REDACTED###
                    image_pull_policy          = "IfNotPresent"
                    name                       = "compass"
                    stdin                      = false
                    stdin_once                 = false
                    termination_message_path   = "/dev/termination-log"
                    termination_message_policy = "File"
                    tty                        = false

                    env {
                        name  = "NODE_ENV"
                        value = "development"
                    }

                    resources {
                        limits   = {
                            "cpu"    = "4"
                            "memory" = "16Gi"
                        }
                        requests = {
                            "cpu"    = "2"
                            "memory" = "16Gi"
                        }
                    }

                    volume_mount {
                        mount_path        = "/tmp"
                        mount_propagation = "None"
                        name              = "tmp"
                        read_only         = false
                    }
                }

                topology_spread_constraint {
                    max_skew           = 1
                    topology_key       = "topology.kubernetes.io/zone"
                    when_unsatisfiable = "DoNotSchedule"

                    label_selector {
                        match_labels = {
                            "app" = "compass"
                        }
                    }
                }
                topology_spread_constraint {
                    max_skew           = 1
                    topology_key       = "kubernetes.io/hostname"
                    when_unsatisfiable = "ScheduleAnyway"

                    label_selector {
                        match_labels = {
                            "app" = "compass"
                        }
                    }
                }

                volume {
                    name = "tmp"

                    ephemeral {
                        metadata {
                        }

                        spec {
                            access_modes       = [ "ReadWriteOnce" ]
                            storage_class_name = "fast-scratch"

                            resources {
                                requests = {
                                    "storage" = "100Gi"
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}

Debug Output

(I don't have this at hand; hopefully it's not necessary, but I can come back and add it if it is.)

Panic Output

module.k8s-deployment.kubernetes_deployment.deployment: Creating...
╷
│ Error: Plugin did not respond
│
│   with module.k8s-deployment.kubernetes_deployment.deployment,
│   on ../../modules/k8s-deployment/main.tf line 304, in resource "kubernetes_deployment" "deployment":
│  304: resource "kubernetes_deployment" "deployment" {
│
│ The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).ApplyResourceChange call. The
│ plugin logs may contain more details.
╵

Stack trace from the terraform-provider-kubernetes plugin:

panic: interface conversion: interface {} is nil, not map[string]interface {}

goroutine 150 [running]:
github.com/hashicorp/terraform-provider-kubernetes/kubernetes.expandMetadata({0xc00173d620?, _, _})
        /home/rbarry/terraform-provider-kubernetes/kubernetes/structures.go:47 +0x565
github.com/hashicorp/terraform-provider-kubernetes/kubernetes.expandEphemeralVolumeSource({0xc00173d600?, 0xc001879b00?, 0x1fdecc9?})
        /home/rbarry/terraform-provider-kubernetes/kubernetes/structure_persistent_volume_spec.go:1260 +0xf9
github.com/hashicorp/terraform-provider-kubernetes/kubernetes.expandVolumes({0xc00173d5e0, 0x1, 0x1fdade0?})
        /home/rbarry/terraform-provider-kubernetes/kubernetes/structures_pod.go:1521 +0x1ca8
github.com/hashicorp/terraform-provider-kubernetes/kubernetes.expandPodSpec({0xc00173d4f0, 0x1, 0x1fd8475?})
        /home/rbarry/terraform-provider-kubernetes/kubernetes/structures_pod.go:821 +0xfc8
github.com/hashicorp/terraform-provider-kubernetes/kubernetes.expandPodTemplate({0xc00173d450, 0x1, 0x1fddce3?})
        /home/rbarry/terraform-provider-kubernetes/kubernetes/structures_deployment.go:119 +0x1e6
github.com/hashicorp/terraform-provider-kubernetes/kubernetes.expandDeploymentSpec({0xc00173d310, 0x1, 0x0?})
        /home/rbarry/terraform-provider-kubernetes/kubernetes/structures_deployment.go:100 +0x419
github.com/hashicorp/terraform-provider-kubernetes/kubernetes.resourceKubernetesDeploymentCreate({0x22f1668, 0xc0013eb9b0}, 0xc000672880, {0x1f09340?, 0xc001610690})
        /home/rbarry/terraform-provider-kubernetes/kubernetes/resource_kubernetes_deployment.go:226 +0x1af
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).create(0xc000a24000, {0x22f16a0, 0xc0010c4270}, 0xd…, {0x1f09340, 0xc001610690})
        /home/rbarry/go/pkg/mod/github.com/hashicorp/terraform-plugin-sdk/v2@v2.24.1/helper/schema/resource.go:707 +0x12e
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).Apply(0xc000a24000, {0x22f16a0, 0xc0010c4270}, 0xc0…1809520, 0xc000326f80, {0x1f09340, 0xc001610690})
        /home/rbarry/go/pkg/mod/github.com/hashicorp/terraform-plugin-sdk/v2@v2.24.1/helper/schema/resource.go:837 +0xa85
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*GRPCProviderServer).ApplyResourceChange(0xc00049b290, {0x22f16a0?, 0xc0010bdd70?}, 0xc0011de730)
        /home/rbarry/go/pkg/mod/github.com/hashicorp/terraform-plugin-sdk/v2@v2.24.1/helper/schema/grpc_provider.go:1021 +0xe8d
github.com/hashicorp/terraform-plugin-mux/tf5muxserver.muxServer.ApplyResourceChange({0xc000edd230, 0xc000edd290, {0xc0001…0c00, 0x2, 0x2}, 0xc000edd260, 0xc000723e50, 0xc000fc8990, 0xc000edd2c0}, {0x22f16a0, ...}, ...)
        /home/rbarry/go/pkg/mod/github.com/hashicorp/terraform-plugin-mux@v0.7.0/tf5muxserver/mux_server_ApplyResourceChange.go:27 +0x142
github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server.(*server).ApplyResourceChange(0xc00024c820, {0x22f16a0?, 0xc0010bc180?}, 0xc000270070)
        /home/rbarry/go/pkg/mod/github.com/hashicorp/terraform-plugin-go@v0.14.2/tfprotov5/tf5server/server.go:818 +0x574
github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5._Provider_ApplyResourceChange_Handler({0x1f026a0?, 0xc00024c820}, {0x22f16a0, 0xc0010bc180}, 0xc000270000, 0x0)
        /home/rbarry/go/pkg/mod/github.com/hashicorp/terraform-plugin-go@v0.14.2/tfprotov5/internal/tfplugin5/tfplugin5_grpc.pb.go:385 +0x170
google.golang.org/grpc.(*Server).processUnaryRPC(0xc0013521e0, {0x22f80f8, 0xc000bef040}, 0xc001cce000, 0xc000e6eba0, 0x32…40a0, 0x0)
        /home/rbarry/go/pkg/mod/google.golang.org/grpc@v1.53.0/server.go:1336 +0xd33
google.golang.org/grpc.(*Server).handleStream(0xc0013521e0, {0x22f80f8, 0xc000bef040}, 0xc001cce000, 0x0)
        /home/rbarry/go/pkg/mod/google.golang.org/grpc@v1.53.0/server.go:1704 +0xa36
google.golang.org/grpc.(*Server).serveStreams.func1.2()
        /home/rbarry/go/pkg/mod/google.golang.org/grpc@v1.53.0/server.go:965 +0x98
created by google.golang.org/grpc.(*Server).serveStreams.func1
        /home/rbarry/go/pkg/mod/google.golang.org/grpc@v1.53.0/server.go:963 +0x28a

Error: The terraform-provider-kubernetes plugin crashed!

This is always indicative of a bug within the plugin. It would be immensely
helpful if you could report the crash with the plugin's maintainers so that it
can be fixed. The output above should help diagnose the issue.
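The first frame of the trace points at an unchecked Go type assertion: an empty `metadata {}` block reaches `expandMetadata` as a one-element list whose element is nil, and asserting nil to `map[string]interface{}` panics. Below is a minimal, self-contained sketch of that failure mode and the comma-ok guard that would turn it into an ordinary error. The function body is a hypothetical simplification for illustration, not the provider's actual code:

```go
package main

import "fmt"

// expandMetadata mimics the failure mode from the stack trace. Terraform's
// SDK hands block data to the provider as []interface{}, and an empty
// `metadata {}` block arrives as a one-element slice containing nil.
func expandMetadata(in []interface{}) (map[string]interface{}, error) {
	if len(in) == 0 {
		return nil, nil
	}
	// A bare assertion, in[0].(map[string]interface{}), panics when in[0]
	// is nil. The comma-ok form makes the failure checkable instead.
	m, ok := in[0].(map[string]interface{})
	if !ok {
		return nil, fmt.Errorf("metadata block is empty or malformed")
	}
	return m, nil
}

func main() {
	// An empty `metadata {}` block shows up as []interface{}{nil}.
	if _, err := expandMetadata([]interface{}{nil}); err != nil {
		fmt.Println("error:", err) // graceful error instead of a panic
	}
}
```

With the unguarded assertion, the first call in `main` would crash the whole plugin process exactly as shown above; with the guard, Terraform can surface a readable validation error instead.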

Steps to Reproduce

  1. terraform apply

Expected Behavior

Some error before the panic, given an empty metadata block. I added the empty metadata block because when I hadn't specified one within spec.template.spec.volume, planning failed with an error saying one is required ("Insufficient metadata blocks"). Once I added the empty metadata block, planning succeeded just fine, but of course applying failed as above.

Actual Behavior

Without defining the metadata block, I got an error explaining I needed one, but with an empty one everything seemed to work up until the panic.

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
@ryanbarry ryanbarry added the bug label Jul 20, 2023
@github-actions github-actions bot added the crash label Jul 20, 2023
@arybolovlev arybolovlev self-assigned this Jul 20, 2023
@arybolovlev
Contributor

Hi @ryanbarry,

Thank you for reporting this issue! I will take care of it.

@ryanbarry
Author

Thanks @arybolovlev – let me know if there's any assistance I can provide!

And while I'm here, thanks to you and the team for making it easy to build and use an arbitrary commit – I'm too impatient to wait for the release that incorporates this functionality, but I was pleasantly surprised by how simple it was to grab the latest main and integrate it 😄

@arybolovlev
Contributor

arybolovlev commented Jul 21, 2023

Thank you, @ryanbarry!

I hope you don't use it in production yet. One thing I have noticed is that the schema is missing an attribute.

Here is an example of how it looks now:

volume {
  name = "this"
  ephemeral {
    metadata {...} // REQUIRED
    spec {...}
  }
}

Here is how it should be:

volume {
  name = "this"
  ephemeral {
    volume_claim_template {
      metadata {...} // OPTIONAL
      spec {...}
    }
  }
}

Since it is not released yet, I am going to make this update. I still need to update a few files before marking the PR ready for review, but all in all, I think the code changes are ready.

Your feedback is more than welcome! 🙏

Thanks!

@ryanbarry
Author

Not in production yet, and it looks like a good change to me! I'll build a new version, update my usage to the new schema, and report back if I have any issues 👨‍💻

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Jul 27, 2024