
Create required resources prior initialising terraform state file #18597

Open
vutoff opened this issue Aug 3, 2018 · 4 comments

Comments

@vutoff

vutoff commented Aug 3, 2018

Terraform Version

Terraform v0.11.7

Terraform Configuration Files

terraform {
  required_version = "0.11.7"

  backend "s3" {
    dynamodb_table = "terraform-state-lock"
    bucket         = "s3-tfstate-bucket"
    key            = "s3/key/terraform.tfstate"
    profile        = "aws_profile"
    region         = "eu-west-1"
  }
}

Context

First, I only use AWS, so I'm not sure how this is solved for other backends.
I think it's a common problem that Terraform requires every resource referenced in the backend configuration to exist before terraform init is invoked. That produces a chicken-and-egg situation: something must be created, and recorded in the state, before that same state can be initialised.
Such resources are the S3 bucket holding the state and the DynamoDB table used for lock management.

Possible solution

It would be great if we could tell Terraform to create those resources for us before initialising the state, and then record those resources in the state.
Creating a bucket is usually an idempotent operation (if the bucket already exists, the call simply succeeds).
The same should apply to DynamoDB tables, though the table configuration might differ between use cases.

@apparentlymart
Contributor

Hi @vutoff,

Terraform doesn't automatically create things during terraform init because those things would then need to be tracked somewhere, and there isn't yet anywhere to track them. As a rule, we don't create things that we can't track, because doing so would violate the promises Terraform tries to make.

Users usually resolve this "chicken and egg" problem by manually creating the few required base resources (AWS account, S3 bucket, DynamoDB table), either via the console or via a CLI tool. Given that these objects will never need to change again and that they have well-known names (as opposed to generated IDs), a lot of users just let them remain untracked by Terraform indefinitely and document the special names as part of a recovery guide.

If having everything managed by Terraform is important to you, you can bootstrap like this:

  • Write a Terraform configuration for the S3 bucket, DynamoDB Table, and any IAM objects you'll use to access them. Leave the backend entirely unconfigured for now.
  • Run terraform apply to create the relevant objects.
  • Once you're satisfied that they are working as expected, add the backend "s3" block and run terraform init, which will prompt you to confirm that you wish to migrate the state to S3.
  • Since S3 won't allow you to delete a bucket that still contains objects, and the state object was created by the backend code rather than by a Terraform resource, the S3 API will effectively prevent you from inadvertently destroying the bucket without first manually cleaning out all of the state objects in it.
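As a sketch, the configuration written in the first step might look like this (the bucket and table names are taken from the backend block at the top of this issue; the resource arguments are an assumption for a 0.11-era AWS provider, not a confirmed recipe):

```hcl
# Hypothetical bootstrap configuration. Run with local state first,
# then add the backend "s3" block and re-run terraform init to migrate.
provider "aws" {
  region  = "eu-west-1"
  profile = "aws_profile"
}

resource "aws_s3_bucket" "tfstate" {
  bucket = "s3-tfstate-bucket"

  # Versioning lets you recover earlier state revisions.
  versioning {
    enabled = true
  }
}

resource "aws_dynamodb_table" "tfstate_lock" {
  # The S3 backend expects a table with a string hash key named "LockID".
  name           = "terraform-state-lock"
  hash_key       = "LockID"
  read_capacity  = 1
  write_capacity = 1

  attribute {
    name = "LockID"
    type = "S"
  }
}
```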

We may be able to smooth this out later by having a module on the Terraform Registry for each backend that contains the objects it needs, and then a special mode in terraform init that performs the above steps using that module. For now, though, I expect it'd be more straightforward to update the backend documentation to explain this bootstrapping flow.

@emoshaya

emoshaya commented Oct 7, 2018

I'm also facing the same issue here

@egarbi
Contributor

egarbi commented Mar 6, 2019

In the meantime, if you are using AWS, I would use a CloudFormation stack for the bucket and the DynamoDB table.
You can then create them by running aws cloudformation create-stack.
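For illustration, such a stack might look like the following (the template, resource, and stack names here are assumptions, not something provided in this comment):

```yaml
# bootstrap.yaml -- hypothetical CloudFormation template for the
# Terraform state bucket and lock table.
AWSTemplateFormatVersion: "2010-09-09"
Resources:
  StateBucket:
    Type: AWS::S3::Bucket
    Properties:
      VersioningConfiguration:
        Status: Enabled
  LockTable:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: terraform-state-lock
      # Terraform's S3 backend locks on a string attribute named "LockID".
      AttributeDefinitions:
        - AttributeName: LockID
          AttributeType: S
      KeySchema:
        - AttributeName: LockID
          KeyType: HASH
      BillingMode: PAY_PER_REQUEST
```

Deployed with something like: aws cloudformation create-stack --stack-name tf-bootstrap --template-body file://bootstrap.yaml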

@deitch

deitch commented Sep 14, 2019

I have been dealing with this as well. I ended up with a repo called myorg/infrastructure-bootstrap that contains a script. The script uses the AWS CLI (natively or via Docker, whichever is available) to create the minimum required resources: a single S3 bucket, a DynamoDB table, and some role trust between my users account and the statefiles account. I then have infrastructure-statefiles, where I manage the state buckets and tables for everything else (in a dedicated account), and infrastructure-users, where I manage users (in a dedicated account) and the roles and groups they use to access the rest. Finally, we have a few repos that manage all of the "real" resources:

  • EC2/ASG/ELB
  • Vault deployments
  • etc.

Those are all duplicated across multiple environments.
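For illustration, the core of such a bootstrap script might be sketched like this (bucket, table, and region names are placeholders, not taken from the comment, and the role-trust part is omitted):

```shell
#!/usr/bin/env bash
# Hypothetical bootstrap sketch: creates the minimum resources the
# S3 backend needs, using only the AWS CLI. All names are placeholders.
set -euo pipefail

BUCKET="myorg-tfstate"        # assumed bucket name
TABLE="terraform-state-lock"  # assumed lock table name
REGION="eu-west-1"            # assumed region

# Create the state bucket (fails if another account owns the name).
aws s3api create-bucket --bucket "$BUCKET" --region "$REGION" \
  --create-bucket-configuration LocationConstraint="$REGION"

# Enable versioning so earlier state revisions are recoverable.
aws s3api put-bucket-versioning --bucket "$BUCKET" \
  --versioning-configuration Status=Enabled

# Create the lock table; the S3 backend expects a "LockID" string key.
aws dynamodb create-table --table-name "$TABLE" --region "$REGION" \
  --attribute-definitions AttributeName=LockID,AttributeType=S \
  --key-schema AttributeName=LockID,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST
```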
