Terraform forcing replacement of ECS task definition - (only on task definitions with mount points) #11526
I'm having the same issue. I think it's ordering the custom environment variables, plus adding some default configurations if you don't already have them in your task definition. In my case I had to add these default options and reorder the environment variables according to the diff output. It'd be nice if Terraform could handle this automatically.
Going off of @moyuanhuang, I also suspect the issue is the ordering of environment variables. I do NOT see the issue with secrets. One thing to note for my use case is that I am changing the image of the task definition. So I do expect a new task definition to be created with the new image, but I do not expect to see a diff for unchanging environment variables. This makes evaluating diffs for task definitions extremely difficult. I notice that the AWS API and CLI do return these arrays in a consistent order (from what I can see), so perhaps this is something that Terraform or the provider itself is doing.
Hi, I'm having the same issue without mount points. Example with a Docker health check:
@LeComptoirDesPharmacies That probably means there's a default value of 5 for this particular config. However, because you don't specify that config in your task definition, Terraform thinks that you're trying to set it to null. Add the default value explicitly to your task definition and you should be able to avoid Terraform recreating the task.
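To illustrate that advice, here is a hedged sketch (the curl command is a placeholder, not from this thread): spelling out the documented ECS health check defaults next to the field you actually care about keeps the API response and the Terraform config identical, so no phantom diff appears.

```hcl
# Inside the container definition map that gets jsonencode()'d.
# interval/timeout/retries are the documented ECS defaults, written out
# explicitly so the stored task definition matches the config exactly.
healthCheck = {
  command  = ["CMD-SHELL", "curl -f http://localhost/ || exit 1"]
  interval = 30
  timeout  = 5
  retries  = 3
}
```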
@moyuanhuang Yes, thanks.
I found that alphabetizing my env variables by name seems to keep them out of the plan. I noticed that the ECS task definition stores them that way in the JSON output in the console.
I can confirm that alphabetizing, as @jonesmac mentioned, and adding in the items with their default values that Terraform thinks have changed will resolve this, as a workaround.
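A minimal sketch of that workaround, assuming the variables live in a hypothetical local map: Terraform's `keys()` returns map keys in lexical order, so building the environment list with a `for` expression keeps it alphabetized the same way the ECS API stores it.

```hcl
locals {
  # Hypothetical variables; substitute your own.
  env_map = {
    APP_ENV   = "production"
    LOG_LEVEL = "info"
  }

  # keys() yields keys in lexical (alphabetical) order, matching the order
  # the ECS API returns, so plans stop showing spurious reorder diffs.
  container_environment = [
    for name in keys(local.env_map) : {
      name  = name
      value = local.env_map[name]
    }
  ]
}
```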
I have also been hit by this.
The same thing happened using the FluentBit log router. After adding its container definition, Terraform was forcing a new plan each time, even without touching anything:

```
~ {
    - cpu          = 0 -> null
    - environment  = [] -> null
    - mountPoints  = [] -> null
    - portMappings = [] -> null
    - user         = "0" -> null
    - volumesFrom  = [] -> null
      # (6 unchanged elements hidden)
  },
```

After setting these values explicitly in the code, there were no more changes. Here's the FluentBit container definition:

```hcl
{
  essential = true,
  image     = var.fluentbit_image_url,
  name      = "log_router",
  firelensConfiguration = {
    type = "fluentbit"
  },
  logConfiguration = {
    logDriver = "awslogs",
    options = {
      awslogs-group         = "firelens-container",
      awslogs-region        = var.region,
      awslogs-create-group  = "true",
      awslogs-stream-prefix = "firelens"
    }
  },
  memoryReservation = var.fluentbit_memory_reservation
  cpu               = 0
  environment       = []
  mountPoints       = []
  portMappings      = []
  user              = "0"
  volumesFrom       = []
}
```
Hey y'all 👋 Thank you for taking the time to file this issue, and for the continued discussion around it. Given that there's been a number of AWS provider releases since the last update here, can anyone confirm if you're still experiencing this behavior? |
@justinretzolk I can tell you this is still an issue. We've been hitting it for almost a year now, and I've made a concerted effort over the last week or so to address it. These are our versions (should be the latest at the time of posting this):
Still actively going through the workarounds above trying to get this to work correctly, but so far no dice. |
@justinretzolk This issue still persists in the latest AWS provider version and should definitely need to be addressed. |
It's been a little while since I last posted. I've attempted to fix this a few times since my last post, but I've had no success so far. To provide a little more information, this is one of the ECS tasks that's being affected (sorry for the sanitization):
This is part of the plan output immediately after an apply. One attempt I made recently was focused just on getting the 'cpu' diff above to stop appearing. Not sure what I'm doing wrong, but at this point we've begun dancing around the issue instead of fixing it.
Hi everyone, while upgrading from TF version 0.11.x to version 1.0.10, we ran into a similar issue. Though alphabetizing and explicitly setting the defaulted values (including the null/empty ones) worked, it's a cumbersome process to refactor task definitions with so many parameters.
|
Recently re-tested on the latest AWS provider, 4.10. The issue still seems to be present. I did find another workaround for this, though. It's not great, but I think it's better than what we'd been doing. Essentially it boils down to this:
This way a standard plan/apply never causes the containers to restart (unless some other attribute changed). If you need to force a restart/redeploy, taint the resource and then reapply. This is by no means a fix for this issue, but I'm beginning to wish there were a way to override the check for resource replacement, so you could provide a sha or something, and if that changes it would update. That would make this a lot easier.
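That taint-based workflow might be sketched like this (resource and local names are hypothetical, and whether to ignore `container_definitions` entirely is a judgment call):

```hcl
resource "aws_ecs_task_definition" "app" {
  family                   = "app"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = 256
  memory                   = 512
  container_definitions    = jsonencode(local.container_definitions)

  lifecycle {
    # Suppress the perpetual diff on container_definitions. To force a new
    # revision (e.g. after changing the image), taint and reapply:
    #   terraform taint aws_ecs_task_definition.app
    ignore_changes = [container_definitions]
  }
}
```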
For me, adding just the following worked:

```hcl
{
  essential = true
  image     = "public.ecr.aws/aws-observability/aws-for-fluent-bit:stable"
  name      = "log-router"
  firelensConfiguration = {
    type = "fluentbit"
    options = {
      enable-ecs-log-metadata = "true"
      config-file-type        = "file"
      config-file-value       = "/fluent-bit/configs/parse-json.conf"
    }
  }
  logConfiguration = {
    logDriver = "awslogs"
    options = {
      awslogs-group         = aws_cloudwatch_log_group.api_log_group.name
      awslogs-region        = local.aws_region
      awslogs-create-group  = "true"
      awslogs-stream-prefix = "firelens"
    }
  }
  memoryReservation = 50
  user              = "0"
}
```
I have the same issue with the latest AWS provider: 4.27.0.
We also have this issue on Terraform 1.2.7 and AWS provider 4.31.0. The plan output is marking only the task definition for forced replacement. With redactions:
No ENV variables changed, no secrets changed, and no other configuration keys have been changed.
A follow-up to my previous comment: the replacement was actually not caused by Terraform getting the diff wrong. It was actually caused by a variable which was dependent on a file (data resource) whose value changed between runs.
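To illustrate that failure mode (file name hypothetical): any input to `container_definitions` that changes on every run forces a new task definition revision, whereas a value derived from stable file content only changes when the file does.

```hcl
locals {
  # Anti-pattern: timestamp() changes on every plan, so anything that
  # interpolates it forces a new task definition revision each time.
  volatile_tag = timestamp()

  # Safer: hash the file the variable depends on; the value only changes
  # when the file's contents change.
  config_hash = filesha256("${path.module}/app-config.json")
}
```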
Still an issue with |
In my case I had:
TCP was being evaluated as tcp, and on the second run Terraform was not smart enough to recognize that "TCP" was going to be evaluated as "tcp", so it kept trying to replace the definition. I changed "TCP" to "tcp" and it stopped trying to replace it.
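A minimal sketch of that fix (the port numbers are illustrative): the ECS API stores the protocol lowercased, so writing it as "tcp" in the first place keeps the stored state and the config identical.

```hcl
# Inside the container definition:
portMappings = [
  {
    containerPort = 8080
    hostPort      = 8080
    protocol      = "tcp" # lowercase; "TCP" is normalized to "tcp" by the
                          # API and then shows up as a perpetual diff
  }
]
```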
Still an issue with 5.16.1 |
In my case, the recreation was caused by the
|
I've submitted a PR for the healthcheck defaults normalization specifically: #38872 |
This issue was originally opened by @im-lcoupe as hashicorp/terraform#23780. It was migrated here as a result of the provider split. The original body of the issue is below.
Summary
Hi there,
So this only seems to have become a problem since upgrading my code to the latest version, and strangely it only seems to happen on the task definitions with mount points (however, the format of those definitions hasn't changed...).
Terraform will constantly try and replace the two task definitions regardless of whether any changes have been made to them...
Any guidance on this would be greatly appreciated, as it means the task definition revision is changing on every run (which of course is not ideal)...
Terraform Version
Terraform Configuration Files - 1st problematic task definition - followed by the second
Second Task definition
Example plan output. To me it isn't clear what exactly needs to change that would require a forced replacement: it looks to remove vars that are already there and then re-add them, and in other places it seems to reorder them. It also looks to be adding the network mode (awsvpc), which is already defined in the task definition.
Expected Behavior
Terraform should not try and replace the task definitions on every plan.
Actual Behavior
Terraform forces replacement of the task definition on every plan.
Steps to Reproduce
Run `terraform plan`.