EMR configurations field forcing new EMR cluster when path changes #543
I'm seeing a similar issue without specifying a configurations file. This example causes a destroy on every apply:

resource "aws_emr_cluster" "emr-cluster" {
  name                              = "${length(var.name) > 0 ? var.name : var.tags["environment"]}-emr-cluster"
  release_label                     = "${var.release_label}"
  applications                      = "${split(",", var.applications)}"
  termination_protection            = "${var.termination_protection}"
  keep_job_flow_alive_when_no_steps = "${var.keep_job_flow_alive_when_no_steps}"

  ec2_attributes {
    key_name                          = "${var.key_name}"
    subnet_id                         = "${element(split(",", var.private_subnets), count.index)}"
    additional_master_security_groups = "${aws_security_group.emr-cluster-master.id}"
    additional_slave_security_groups  = "${aws_security_group.emr-cluster-slaves.id}"
    instance_profile                  = "${var.use_default_aws_instance_profile == true ? format("arn:aws:iam::%s:instance-profile/%s", var.aws_account_id, var.default_aws_instance_profile) : aws_iam_instance_profile.emr-profile.arn}"
  }

  master_instance_type = "${var.master_instance_type}"
  core_instance_type   = "${var.core_instance_type}"
  core_instance_count  = "${var.core_instance_count}"
  log_uri              = "${module.s3.site_name}/logs"
  visible_to_all_users = "${var.visible_to_all_users}"
  tags                 = "${var.tags}"
  service_role         = "${var.use_default_aws_role == true ? format("arn:aws:iam::%s:role/%s", var.aws_account_id, var.default_aws_role) : aws_iam_role.iam_emr_service_role.arn}"
}
@cpower ☝️
Turns out my issue was related to the […].
This relates to #1385 and will likely require creating a new attribute that accepts JSON input instead of a file path/URL (deprecating the existing one) to fix it.
In the meantime, a less-than-ideal workaround for this particular case is to use the lifecycle ignore_changes configuration, which allows the file to live at different paths across workstations, or the file path/URL to move, without resource recreation:

resource "aws_emr_cluster" "example" {
  # ... other configuration ...

  lifecycle {
    ignore_changes = ["configurations"]
  }
}
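For the path-change case described in this issue, a minimal sketch of how that workaround fits together; the path and names are illustrative, not taken from the original reports:

resource "aws_emr_cluster" "example" {
  # ... other configuration ...

  # The literal path string is what gets stored in state, so checking the
  # module out at a different location would normally force replacement.
  configurations = "${path.module}/configurations.json"

  lifecycle {
    # Ignore any diff on the configurations argument, including a changed
    # path. Note this also suppresses diffs if the argument is later
    # pointed at a different file intentionally.
    ignore_changes = ["configurations"]
  }
}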
Version 1.30.0 of the AWS provider, releasing later today, will have a new configurations_json attribute. To convert to the new attribute, given this Terraform configuration:

resource "aws_emr_cluster" "example" {
  # ... other configuration ...

  configurations = "example.json"
}

Update it to:

resource "aws_emr_cluster" "example" {
  # ... other configuration ...

  configurations_json = "${file("example.json")}"
}
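Because configurations_json accepts the JSON document itself rather than a path, the file can also be dropped entirely and the document inlined, for example with a heredoc; the spark-defaults classification below is purely illustrative:

resource "aws_emr_cluster" "example" {
  # ... other configuration ...

  # Inline the EMR configurations JSON so no file path is tracked in state.
  configurations_json = <<EOF
[
  {
    "Classification": "spark-defaults",
    "Properties": {
      "spark.executor.memory": "2g"
    }
  }
]
EOF
}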
This has been released in version 1.30.0 of the AWS provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading.
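If you pin provider versions, a constraint along these lines ensures the new attribute is available; this is a sketch using the provider-block version syntax current at the time of this thread, and the region value is illustrative:

provider "aws" {
  region = "us-east-1" # illustrative

  # Require at least 1.30.0 so the configurations_json attribute exists.
  version = "~> 1.30"
}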
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!
This issue was originally opened by @cpower as hashicorp/terraform#12047. It was migrated here as part of the provider split. The original body of the issue is below.
Hi there,
Running terraform apply on a Terraform-managed EMR cluster that uses a path.module reference to configurations.json with remote state, when the path to the file changes, causes Terraform to destroy the EMR cluster and create a new one, because the path to the configurations.json file is stored in the Terraform state, which forces a new resource.

Terraform Version
Terraform v0.8.5
Affected Resource(s)
aws_emr_cluster
Terraform Configuration Files
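The configuration block did not carry over in the issue migration; a minimal sketch of the pattern the issue describes, with illustrative names, would look like:

resource "aws_emr_cluster" "cluster" {
  # ... other configuration ...

  # Module-relative path whose resolved value differs between machines or
  # checkout locations, which is what forces the replacement described here.
  configurations = "${path.module}/configurations.json"
}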
Expected Behavior
Updating a Terraform-managed EMR cluster that uses a path.module reference to a configurations JSON file from a different path/machine when using remote state should not destroy the existing EMR cluster and create a new one.
Actual Behavior
Running terraform apply on a Terraform-managed EMR cluster that uses a path.module reference causes Terraform to destroy the EMR cluster and create a new one, because the path to the configurations.json file is stored in the Terraform state.

Steps to Reproduce
1. Run terraform apply to create an EMR cluster on MachineA that has a path.module reference to a configurations.json file (use remote state)
2. Run terraform apply again from a different path/machine; Terraform should destroy the EMR cluster and create a new one

References
Related Issues
hashicorp/terraform#8754
hashicorp/terraform#7927