
enable using S3 as Terraform backend #1654

Closed
dusansusic opened this issue Apr 22, 2019 · 5 comments

Comments

@dusansusic

Version

$ openshift-install version
openshift-install-darwin-amd64 v0.16.1
built from commit e3fceacc975953f56cb09931e6be015a36eb6075

Platform (aws|libvirt|openstack):

aws

Is there any plan to add S3 as a Terraform backend when provisioning OpenShift on AWS?

Right now, the tfstate file is stored (by default) in the current directory. It would be great if an S3 bucket could be used for this.
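
For reference, the request is essentially for the installer to generate something like the standard Terraform S3 backend block shown below. This is only a sketch of what such a configuration normally looks like; the bucket, key, and region are placeholders, and the installer does not currently generate or accept any such file:

    # Sketch only: a standard Terraform S3 backend configuration, written out by hand.
    # Bucket, key, and region are placeholders, not anything the installer provides.
    cat > backend.tf <<'EOF'
    terraform {
      backend "s3" {
        bucket = "my-tfstate-bucket"
        key    = "clusters/dev-1/terraform.tfstate"
        region = "us-east-1"
      }
    }
    EOF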

@wking
Member

wking commented Apr 22, 2019

Right now, the tfstate file is stored (by default) in the current directory. It would be great if an S3 bucket could be used for this.

Can you explain your use case? Currently you can push the tfstate up to S3 whenever you like, so I don't see a need to bake that into the installer itself.
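
As a rough sketch of that manual approach (the bucket and key below are placeholders), the state file the installer leaves in the working directory can simply be copied up after a run:

    # Sketch only: push the installer's local Terraform state to S3 after a run.
    # The bucket and key are placeholders; the installer itself knows nothing about them.
    aws s3 cp ./terraform.tfstate s3://my-tfstate-bucket/clusters/dev-1/terraform.tfstate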

@dusansusic
Author

Pushing the tfstate to S3 manually is one of the options, of course.

I am installing multiple clusters for development/testing purposes every day. Before version 4.0 and this installer, I used a similar approach to bootstrap OpenShift clusters:

  • Terraform the AWS infrastructure
  • use S3 as the backend to store state
  • run the Ansible installer

Once a cluster isn't needed anymore, I just run a job with the cluster name and destroy all of its related infrastructure.

I know I can use the same approach here, but it's easier to have Terraform store the state file directly in an S3 bucket and not have to take care of it when provisioning a cluster with this installer.

Your call, thanks anyway.

@wking
Member

wking commented Apr 23, 2019

Once a cluster isn't needed anymore, I just run a job with the cluster name and destroy all of its related infrastructure.

The tfstate has nothing to do with cluster teardown, because it doesn't know about resources the cluster creates for itself once the installer is done. You can almost clean up your cluster now with just its infrastructure name (#1280, see here for a works-for-me example), and things like openshift/cluster-image-registry-operator#260 are working towards removing the remaining openshiftClusterID tags from operators.

You might also be interested in Hive, which allows you to store that metadata in a separate, controlling cluster for future removal. That's going to be more robust than recreating your metadata.json later, but you need that separate cluster to run Hive. Stashing the metadata in S3 is something of a middle ground. Currently my impression is that this is too niche to be worth adding AWS-specific metadata.json handling to the installer, so I'll close this issue, but feel free to continue the discussion if you think there's a stronger case I'm missing.

/close
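
As an aside, the "middle ground" described above (stashing metadata.json in S3 for later teardown) can be approximated today with no installer changes. This is only a sketch under assumed names; the cluster name, bucket, and directory layout are made up, and the only installer command used is the standard destroy one:

    # Sketch only: stash metadata.json in S3 after install, restore it later for teardown.
    # CLUSTER, the bucket, and the directory layout are placeholders.
    CLUSTER=dev-1
    BUCKET=s3://my-cluster-metadata

    # After a successful install, save the file that destroy will need later.
    aws s3 cp "clusters/${CLUSTER}/metadata.json" "${BUCKET}/${CLUSTER}/metadata.json"

    # Later, when the cluster is no longer needed:
    mkdir -p "clusters/${CLUSTER}"
    aws s3 cp "${BUCKET}/${CLUSTER}/metadata.json" "clusters/${CLUSTER}/metadata.json"
    openshift-install destroy cluster --dir "clusters/${CLUSTER}"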

@openshift-ci-robot
Contributor

@wking: Closing this issue.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@faermanj
Contributor

I understand the limitations mentioned, but I do believe it would be worth diving a bit deeper into Terraform backend integration.

Otherwise, users who already use Terraform would need to use separate tooling (avoiding which is probably the point of Terraform in the first place)...
