Kubeconfig state unavailable, Terraform defaults to localhost [bug] #992
Labels: area: terraform 💾, needs: discussion 💬, type: enhancement 💅🏼
Seen most often when upgrading an AWS cluster from 0.3.12 to main (the 0.4 candidate): the Terraform kubernetes provider sometimes tries to access the Kubernetes cluster at localhost instead of the EKS host.
The problem is with these two QHub Terraform modules: `kubernetes` and `kubernetes-initialization`. The `kubernetes` module creates the cluster itself, and then `kubernetes-initialization` creates the namespace inside the cluster (plus some secrets). `kubernetes-initialization` is configured like this:
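For illustration, the wiring looks something like the sketch below. The output names and module path are assumptions for the sketch, not QHub's actual identifiers; the point is that the provider block consumes outputs of the `kubernetes` module.

```hcl
# Sketch only: output names and the module source path are hypothetical.
# The kubernetes provider is configured from outputs of the cluster module.
provider "kubernetes" {
  host                   = module.kubernetes.kubernetes_endpoint
  cluster_ca_certificate = base64decode(module.kubernetes.kubernetes_ca_cert)
  token                  = module.kubernetes.kubernetes_token
}

module "kubernetes-initialization" {
  source    = "./modules/kubernetes/initialization"
  namespace = var.namespace

  # Ensure the cluster exists before the namespace and secrets are created.
  depends_on = [module.kubernetes]
}
```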
At the moment, there is a single `terraform apply` step run targeting both of these modules at the same time. The problem is that, if the state of the `kubernetes` module is not known (e.g. not refreshed in the right way at the right time), there is no way we can expect `kubernetes-initialization` to have its provider configured correctly, because that configuration depends on outputs of the earlier module.

We do set `kubernetes-initialization` to `depends_on` the `kubernetes` module, so `terraform apply` shouldn't try to create the namespace before the cluster exists, for example. But the provider needs to be configured at the start of the whole `terraform apply` step; otherwise it's as though we never passed through settings such as `host`, so it defaults to localhost. The best description of this nuance is probably here.
Anyway, I will try splitting the two modules out into separate steps. That means an extra `terraform apply` call, of course, and it might be better to see if we can instead push `kubernetes-initialization` into a joint step with the following module (`kubernetes-ingress`) if we set up appropriate `depends_on` hierarchies. But in general, we need a proper multi-stage definition as per issue #847.
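One way to express the split, assuming the `module.kubernetes` address from the sketch above, is a targeted apply for the cluster followed by a full apply:

```shell
# Two-phase apply: create the cluster first so its outputs are known
# in state before the kubernetes provider is configured for later modules.
terraform apply -target=module.kubernetes
terraform apply
```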