
Feature request - Delete "archive_file" and "local_file" files after they did their job. #18422

Closed
fahrenq opened this issue Jul 10, 2018 · 3 comments

Comments

@fahrenq

fahrenq commented Jul 10, 2018

It would be cool if we could delete the files left behind by the "archive_file" data source (and probably local_file as well, though it's not part of the use case I'll show below).

Use case, with a workaround:
I want to deploy an "empty" Lambda function, because I have a separate CI/CD pipeline for building and deploying the Lambda code. Here's the workaround I came up with:

```hcl
data "archive_file" "empty" {
  type                    = "zip"
  source_content_filename = "lambda.js"
  output_path             = "${path.module}/empty.zip"

  source_content = <<EOF
module.exports.handler = (event, context, callback) => {
  callback(null, {
    statusCode: 500,
    headers: {},
    body: JSON.stringify({text: 'Empty function. Run deployment pipeline.'}),
  });
}
EOF
}

resource "aws_lambda_function" "express" {
  handler       = "lambda.handler"
  function_name = "${local.service_prefix}-express"

  filename = "${data.archive_file.empty.output_path}"
  role     = "${aws_iam_role.lambda.arn}"
  runtime  = "nodejs8.10"
}

resource "null_resource" "delete_empty_file" {
  depends_on = ["aws_lambda_function.express"]

  provisioner "local-exec" {
    command = "rm ${data.archive_file.empty.output_path}"
  }

  triggers {
    always = "${timestamp()}"
  }
}
```

It works fine, except that:

  1. It requires confirming the recreation of the null_resource on every run.
  2. On terraform plan, empty.zip is created and never destroyed.

Thanks.

@apparentlymart

Hi @fahrenq! Thanks for this feature request.

The archive_file data source is very weird and doesn't really behave properly. In retrospect, we regret adding it with its current design where it creates a file on disk: a data source is not expected to have side-effects, so its lifecycle provides no opportunity for Terraform to "clean up".

We continue to support archive_file for now for compatibility, but it is likely to be deprecated in the future. Instead, we recommend that build steps such as these happen separately, before running Terraform, with the generated file passed in either as an input variable or via another data source. This is analogous to using Packer to produce an AMI before using Terraform to create an instance from that AMI: Terraform is a provisioning tool, not a build tool.
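
A minimal sketch of the input-variable pattern, in the Terraform 0.11 syntax used above (the variable name, function name, and zip path are illustrative, not from any official guide):

```hcl
# The CI pipeline builds the package first, e.g.:
#   zip lambda.zip lambda.js
#   terraform apply -var="lambda_package=./lambda.zip"
variable "lambda_package" {
  description = "Path to a zip file built before running Terraform"
}

resource "aws_lambda_function" "express" {
  handler       = "lambda.handler"
  function_name = "my-service-express"   # illustrative name
  runtime       = "nodejs8.10"
  role          = "${aws_iam_role.lambda.arn}"

  filename         = "${var.lambda_package}"
  source_code_hash = "${base64sha256(file(var.lambda_package))}"
}
```

Because the file is produced outside Terraform, there is nothing for Terraform to clean up afterwards.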

For aws_lambda_function in particular, we recommend (e.g. in our Lambda+API Gateway guide) a build step that generates a zip file and uploads it to S3, and then using the S3 source type in aws_lambda_function rather than a local file. As noted above, this is consistent with building an AMI with Packer before using it in Terraform, or producing a Docker image with docker build and pushing it to a registry before deploying it with ECS.
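
The S3 variant might look like this (the bucket name, key, and function name are assumptions for illustration):

```hcl
# The build step runs before Terraform, e.g.:
#   zip lambda.zip lambda.js
#   aws s3 cp lambda.zip s3://my-artifacts/express/v42.zip
resource "aws_lambda_function" "express" {
  handler       = "lambda.handler"
  function_name = "my-service-express"   # illustrative name
  runtime       = "nodejs8.10"
  role          = "${aws_iam_role.lambda.arn}"

  # Pull the package from S3 instead of from a local file.
  s3_bucket = "my-artifacts"
  s3_key    = "express/v42.zip"
}
```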

In your case, if I take your example literally, it looks like you may be doing something a little unusual here by uploading a "placeholder" archive which I assume later gets replaced by a deployment step that interacts directly with the Lambda API. Having both Terraform and some other tool manage the same object is not generally recommended, but since your placeholder code is hard-coded anyway, I assume you could pre-build that zip file and always use the same archive for every run, rather than regenerating it for each deployment.
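
One way to sketch that pre-built-placeholder idea, assuming a placeholder.zip committed alongside the configuration (the ignore_changes list is an untested suggestion, not an official recommendation):

```hcl
resource "aws_lambda_function" "express" {
  handler       = "lambda.handler"
  function_name = "${local.service_prefix}-express"
  runtime       = "nodejs8.10"
  role          = "${aws_iam_role.lambda.arn}"

  # Built once by hand and committed, instead of archive_file on every run.
  filename = "${path.module}/placeholder.zip"

  lifecycle {
    # The real code is deployed by a separate pipeline; ignore code drift
    # so Terraform doesn't try to put the placeholder back.
    ignore_changes = ["filename", "source_code_hash"]
  }
}
```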

So with that said, it is not technically possible to clean up after archive_file in its current form as a data source. To support this would require introducing a new concept of a resource that exists only for the duration of a Terraform run, which is something we have considered before but do not intend to add in the near future since the use-cases of it tend to fall outside of Terraform's intended scope.

@fahrenq

fahrenq commented Jul 10, 2018

@apparentlymart Thank you very much for that detailed answer.
You're right: right now, updating the Lambda code happens from the CodeBuild machine via the CLI; later this will be replaced with CodeDeploy if we find the Lambda versioning feature useful.
Managing the Lambda function with Terraform from the CodeBuild container also doesn't seem like a good idea to me; it feels risky for Terraform to touch the state file on every microservice iteration (even with the -target option).
Anyway, thank you very much for the answer and all your work!

@ghost

ghost commented Apr 4, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost locked and limited the conversation to collaborators on Apr 4, 2020