When the node count is modified, a destroy-triggered provisioner should attempt to drain and delete the node. For example:
resource"metal_device""..." {
// ...provisioner"local-exec" {
when=destroy
command="kubectl -f ${kubeconfig} delete node ${self.hostname}"// we would want to drain/cordon first. can we get the kubeconfig path in this block?
}
}
The CCM (#64) would handle the eventual cleanup of nodes deleted via Terraform or the UI, but this approach would allow Terraform-deleted nodes to be cleaned up less abruptly.
I like this idea a lot. This should be doable; in other locations, we've assumed the kubeadm admin kubeconfig (/etc/kubernetes/admin.conf), so I don't see why we couldn't here.
#90 addresses this on a best-effort basis using local-exec: it makes the attempt, and reminds the user to clean up manually if KUBECONFIG is not set, because a destroy-triggered provisioner can't consume variables in a way that would let me push in a path. Perhaps this is something the Kubernetes provider could address (though, if I recall, it cannot delete a node).
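A minimal sketch of that best-effort behavior (the resource name, drain flags, and exact messaging are assumptions, not what #90 ships): when KUBECONFIG is set in the operator's environment, drain and then delete the node; otherwise, print a reminder to clean up manually.

resource "metal_device" "node" {
  # ...

  # Best-effort cleanup on destroy. A destroy-time provisioner can only
  # reference self, so we rely on the operator's KUBECONFIG environment
  # variable rather than a variable holding a kubeconfig path.
  provisioner "local-exec" {
    when       = destroy
    on_failure = continue
    command    = <<-EOT
      if [ -z "$KUBECONFIG" ]; then
        echo "KUBECONFIG not set; please drain and delete node ${self.hostname} manually"
        exit 0
      fi
      kubectl drain --ignore-daemonsets --force ${self.hostname}
      kubectl delete node ${self.hostname}
    EOT
  }
}

on_failure = continue keeps an unreachable or already-deleted cluster from blocking the rest of the destroy.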