Add a terraform subcommand #134
Here's what I'm planning for the commands to look like:

```sh
# After targeting a shoot cluster.

# Initialize the terraform working directory.
# The directory will be initialized under ~/.garden/cache/<garden-cluster>/seeds/<seed-name>/<shoot-name>/terraform
# or ~/.garden/cache/<garden-cluster>/projects/<project-name>/<shoot-name>/terraform.
# All relevant terraform configmaps will be downloaded into that directory; secrets will remain in the cluster.
$ gardenctl terraform init

# Import an existing resource into the terraform state.
$ gardenctl terraform import google_compute_network.network <shoot-network-name>

# Push the modified state to the control plane of the shoot.
# Pushing will show a diff between the local and remote states, and a confirmation
# will be necessary before continuing. Confirmation can be skipped with the -f flag.
$ gardenctl terraform push
```

@vlerenc @DockToFuture wdyt? |
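For illustration, a session with the proposed subcommand might look like the following; the diff output and confirmation prompt are invented to show the intended flow, not actual gardenctl output:

```sh
# Hypothetical session with the proposed subcommand (output invented).
$ gardenctl terraform init
# working directory: ~/.garden/cache/<garden-cluster>/projects/<project-name>/<shoot-name>/terraform

# A network exists on the cloud provider but is missing from the state; import it:
$ gardenctl terraform import google_compute_network.network <shoot-network-name>

# Push shows a diff between local and remote state and asks for confirmation:
$ gardenctl terraform push
--- remote state
+++ local state
+ google_compute_network.network
Push local state to the shoot's control plane? (y/N) y
```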
Sounds good to me! Regarding the first part, we have already implemented a download function for the terraform files, see |
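A minimal sketch of how that existing download path could be used today, assuming the gardenctl download tf command referenced later in this thread (the targeting steps are shown for context; exact arguments may differ):

```sh
# Sketch, assuming the existing download command mentioned in this thread.
# Target the shoot first, then fetch its terraform files into the local cache.
$ gardenctl target garden <garden-cluster>
$ gardenctl target shoot <shoot-name>
$ gardenctl download tf
```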
Hmm, I have mixed feelings about it. Personally I would go with |
The tf version validation is crucial here if we don't want to break things. And even with a valid validation, such commands should only be used with caution. But there shouldn't be a difference whether you do it in an automated way or manually. |
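To make that point concrete, here is a hypothetical version guard of the kind meant above; it assumes a local terraform.tfstate file and relies on the terraform_version field that tfstate files carry (the jq usage and file name are assumptions for this sketch):

```sh
# Hypothetical guard: refuse to touch the state if the local terraform
# binary does not match the version recorded in the state file.
local_version=$(terraform version | head -n1 | sed 's/^Terraform v//')
state_version=$(jq -r '.terraform_version' terraform.tfstate)

if [ "$local_version" != "$state_version" ]; then
  echo "local terraform v$local_version != state version v$state_version; aborting" >&2
  exit 1
fi
```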
@ialidzhikov this is one of the reasons I decided to open the issue as a hack/script for the ops-toolbelt after initially considering it for gardenctl. Generally, running …

The convenient thing you would get from gardenctl when using terraform is easier credential management and automatic creation of the terraform working directory with a consistent path (depending on the garden, project, seed and shoot clusters), which we already have with gardenctl download tf. |
What would you like to be added:
We should add scripts that can ease work with terraform states.
Why is this needed:
Sometimes there are errors caused by terraform job pods:
To fix these, the operator has to either manually delete the created resources on the cloud provider or manually update the terraform state configmaps kept in the Shoot's namespace on the Seed.
To ease this work, we can introduce a couple of scripts which take care of automatically setting up the terraform working directory and properly updating the state's configmap, as sketched below.
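For a sense of what such a script would automate, here is a rough manual sketch; the namespace and configmap names are illustrative assumptions that depend on the Gardener version and the infrastructure purpose:

```sh
# Illustrative only: names below are assumptions, not fixed Gardener APIs.
NS=shoot--<project-name>--<shoot-name>   # shoot namespace on the seed
CM=<shoot-name>.infra.tf-state           # terraformer state configmap

# 1. Pull the current state out of the configmap.
kubectl -n "$NS" get configmap "$CM" \
  -o jsonpath='{.data.terraform\.tfstate}' > terraform.tfstate

# 2. Repair the state locally, e.g. import a resource that already
#    exists on the cloud provider but is missing from the state.
terraform import google_compute_network.network <shoot-network-name>

# 3. Write the corrected state back into the configmap.
kubectl -n "$NS" create configmap "$CM" --from-file=terraform.tfstate \
  --dry-run=client -o yaml | kubectl apply -f -
```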