When creating a coderd_template for the first time, it fails if the max_port_share_level attribute is set to "owner", even though that should be the default value. The template itself is created fine, but then the following error occurs:
│ Error: Client Error
│
│ with module.k8s-devcontainer.coderd_template.template,
│ on modules/coderd-template/main.tf line 21, in resource "coderd_template" "template":
│ 21: resource "coderd_template" "template" {
│
│ Failed to set max port share level via update: template metadata not modified
If I create the template without setting max_port_share_level initially and then change it afterwards, everything works fine.
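For reference, a trimmed-down version of the resource looks roughly like this (names and the version directory are placeholders, and unrelated attributes are omitted):

```hcl
resource "coderd_template" "template" {
  name = "k8s-devcontainer" # placeholder name

  # Setting this explicitly on the first apply triggers the error above,
  # even though "owner" is supposed to be the default anyway. Leaving it
  # out of the initial apply and adding/changing it on a later apply works.
  max_port_share_level = "owner"

  versions = [
    {
      directory = "./template" # placeholder path
      active    = true
    },
  ]
}
```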
I set TF_LOG to TRACE to see if I could find anything interesting, but it looks like things are actually running fine, aside from the request coming back with an ERROR diagnostic that just repeats the same message. I'm wondering whether the "template metadata not modified" response is really an error, or whether it's being returned with an incorrect status code, since there shouldn't be any changes to apply to the template at that point:
2025-02-24T11:50:05.892-0800 [TRACE] provider.terraform-provider-coderd_v0.0.9: Received downstream response: tf_resource_type=coderd_template tf_rpc=ApplyResourceChange @module=sdk.proto tf_req_duration_ms=13552 tf_proto_version=6.7 tf_provider_addr=registry.terraform.io/coder/coderd tf_req_id=aef2d974-79f1-8e45-1f96-925b53e113a8 @caller=github.com/hashicorp/terraform-plugin-go@v0.25.0/tfprotov6/internal/tf6serverlogging/downstream_request.go:42 diagnostic_error_count=1 diagnostic_warning_count=0 timestamp=2025-02-24T11:50:05.892-0800
2025-02-24T11:50:05.892-0800 [ERROR] provider.terraform-provider-coderd_v0.0.9: Response contains error diagnostic: @module=sdk.proto diagnostic_summary="Client Error" tf_req_id=aef2d974-79f1-8e45-1f96-925b53e113a8 tf_resource_type=coderd_template @caller=github.com/hashicorp/terraform-plugin-go@v0.25.0/tfprotov6/internal/diag/diagnostics.go:58 diagnostic_severity=ERROR tf_proto_version=6.7 tf_provider_addr=registry.terraform.io/coder/coderd tf_rpc=ApplyResourceChange diagnostic_detail="Failed to set max port share level via update: template metadata not modified" timestamp=2025-02-24T11:50:05.892-0800
2025-02-24T11:50:05.892-0800 [TRACE] provider.terraform-provider-coderd_v0.0.9: Served request: @module=sdk.proto tf_provider_addr=registry.terraform.io/coder/coderd tf_resource_type=coderd_template tf_proto_version=6.7 tf_req_id=aef2d974-79f1-8e45-1f96-925b53e113a8 tf_rpc=ApplyResourceChange @caller=github.com/hashicorp/terraform-plugin-go@v0.25.0/tfprotov6/tf6server/server.go:878 timestamp=2025-02-24T11:50:05.892-0800
2025-02-24T11:50:05.893-0800 [TRACE] maybeTainted: module.k8s-devcontainer.coderd_template.template encountered an error during creation, so it is now marked as tainted
2025-02-24T11:50:05.893-0800 [TRACE] terraform.contextPlugins: Schema for provider "registry.terraform.io/coder/coderd" is in the global cache
2025-02-24T11:50:05.893-0800 [TRACE] NodeAbstractResouceInstance.writeResourceInstanceState to workingState for module.k8s-devcontainer.coderd_template.template
2025-02-24T11:50:05.893-0800 [TRACE] NodeAbstractResouceInstance.writeResourceInstanceState: removing state object for module.k8s-devcontainer.coderd_template.template
2025-02-24T11:50:05.893-0800 [TRACE] evalApplyProvisioners: module.k8s-devcontainer.coderd_template.template is tainted, so skipping provisioning
2025-02-24T11:50:05.893-0800 [TRACE] maybeTainted: module.k8s-devcontainer.coderd_template.template was already tainted, so nothing to do
2025-02-24T11:50:05.893-0800 [TRACE] terraform.contextPlugins: Schema for provider "registry.terraform.io/coder/coderd" is in the global cache
2025-02-24T11:50:05.893-0800 [TRACE] NodeAbstractResouceInstance.writeResourceInstanceState to workingState for module.k8s-devcontainer.coderd_template.template
2025-02-24T11:50:05.893-0800 [TRACE] NodeAbstractResouceInstance.writeResourceInstanceState: removing state object for module.k8s-devcontainer.coderd_template.template
2025-02-24T11:50:05.894-0800 [TRACE] statemgr.Filesystem: not making a backup, because the new snapshot is identical to the old
2025-02-24T11:50:05.894-0800 [TRACE] statemgr.Filesystem: no state changes since last snapshot
2025-02-24T11:50:05.894-0800 [TRACE] statemgr.Filesystem: writing snapshot at terraform.tfstate
2025-02-24T11:50:05.907-0800 [DEBUG] State storage *statemgr.Filesystem declined to persist a state snapshot
2025-02-24T11:50:05.907-0800 [ERROR] vertex "module.k8s-devcontainer.coderd_template.template" error: Client Error
2025-02-24T11:50:05.907-0800 [TRACE] vertex "module.k8s-devcontainer.coderd_template.template": visit complete, with errors
2025-02-24T11:50:05.907-0800 [TRACE] dag/walk: upstream of "module.k8s-devcontainer (close)" errored, so skipping
2025-02-24T11:50:05.907-0800 [TRACE] dag/walk: upstream of "provider[\"registry.terraform.io/coder/coderd\"] (close)" errored, so skipping
2025-02-24T11:50:05.907-0800 [TRACE] dag/walk: upstream of "root" errored, so skipping
2025-02-24T11:50:05.907-0800 [TRACE] terraform.contextPlugins: Schema for provider "registry.terraform.io/coder/coderd" is in the global cache
2025-02-24T11:50:05.907-0800 [TRACE] terraform.contextPlugins: Schema for provider "registry.terraform.io/hashicorp/external" is in the global cache
2025-02-24T11:50:05.907-0800 [TRACE] terraform.contextPlugins: Schema for provider "registry.terraform.io/hashicorp/archive" is in the global cache
2025-02-24T11:50:05.908-0800 [TRACE] statemgr.Filesystem: not making a backup, because the new snapshot is identical to the old
2025-02-24T11:50:05.908-0800 [TRACE] statemgr.Filesystem: no state changes since last snapshot
2025-02-24T11:50:05.908-0800 [TRACE] statemgr.Filesystem: writing snapshot at terraform.tfstate