
TaskDefinition Always Replace When Default Values Not Included in ContainerDefinitions #1985

Closed
mattsleiman-discovery opened this issue May 1, 2020 · 7 comments · Fixed by #4126

Labels: awaiting-upstream (the issue cannot be resolved without action in another repository, which may be owned by Pulumi) · bug/diff (Pulumi generating wrong diffs on preview or up) · kind/bug (some behavior is incorrect or out of spec) · resolution/fixed (this issue was fixed) · service/ecs (ECS related things)


@mattsleiman-discovery

I have an ECS service whose task definition is replaced every time I run pulumi up, even if no changes are made to the task definition. The diff that Pulumi reports shows the only change being the container definitions string. It appears that the container definitions string in the stack's state has default values for several fields filled in (e.g. cpu=0, mountPoints=[], volumesFrom=[], etc.) while my code does not. The comparison doesn't add those default values before comparing the two strings, so it reports them as different. I've manually added all the default values and now pulumi up correctly reports no changes. Is it possible to have those default values considered during the comparison so I don't have to make sure they are always present in my code? I'm using version 1.14.1 with the Go SDK.
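
For illustration, "manually adding all the default values" amounts to spelling out in the container definition everything ECS fills in server-side, so the JSON string I send matches the one stored in the stack's state. A minimal, self-contained sketch (the container here is hypothetical; the field names and defaults follow the ECS container definition schema):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	appContainer := map[string]interface{}{
		"name":      "app",
		"image":     "nginx:latest", // hypothetical image, for illustration only
		"essential": true,
		// Defaults that ECS otherwise fills in on its own; spelling them out
		// keeps the marshaled string identical to what lands in state.
		"cpu":         0,
		"mountPoints": []interface{}{},
		"volumesFrom": []interface{}{},
		"portMappings": []map[string]interface{}{{
			"containerPort": 80,
			"hostPort":      80,    // for awsvpc, ECS sets hostPort = containerPort
			"protocol":      "tcp", // ECS default when omitted
		}},
	}

	defs, err := json.Marshal([]interface{}{appContainer})
	if err != nil {
		panic(err)
	}
	fmt.Println(string(defs))
}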

@leezen
Contributor

leezen commented May 3, 2020

Do you mind sharing a code snippet of how you're creating the task definition?

@mattsleiman-discovery
Author

After a bit more investigation I've narrowed it down to just the "user" field in the log-routing container definition, which I've commented out in the code below. If I don't include it, pulumi up always reports a diff, recreates the task definition, and updates the service to use the new task definition. If I include the value, it correctly reports no changes.

containerDefs := pulumi.All(image.ImageName, args.EnvironmentVariables).ApplyString(func(inputs []interface{}) (string, error) {
	imageName := inputs[0].(string)
	envVars := inputs[1].(map[string]interface{})
	datadogAPIKey := pulumiConfig.Require(ctx, "datadog:apiKey")
	environmentName := pulumiConfig.New(ctx, "").Require("env")

	appContainer := stringMap{
		"name":      "app",
		"image":     imageName,
		"essential": true,
		"portMappings": []stringMap{{
			"containerPort": 80,
			"protocol":      "tcp",
		}},
		"environment": toEnvironmentVariableArray(envVars),
		"logConfiguration": stringMap{
			"logDriver": "awsfirelens",
			"options": stringMap{
				"Name":           "datadog",
				"Host":           "http-intake.logs.datadoghq.com",
				"apikey":         datadogAPIKey,
				"dd_service":     "svc",
				"dd_message_key": "log",
				"dd_tags":        fmt.Sprintf("env:%s,stack:%s", environmentName, ctx.Stack()),
				"TLS":            "on",
				"provider":       "ecs",
				"compress":       "gzip",
			},
		},
	}

	datadogAgentContainer := stringMap{
		"name":      "datadog-agent",
		"image":     "datadog/agent:latest",
		"essential": true,
		"environment": toEnvironmentVariableArray(stringMap{
			"DD_API_KEY":                     datadogAPIKey,
			"ECS_FARGATE":                    "true",
			"DD_DOGSTATSD_NON_LOCAL_TRAFFIC": "true",
			"DD_TAGS":                        fmt.Sprintf("env:%s stack:%s", environmentName, ctx.Stack()),
		}),
	}

	logRoutingContainer := stringMap{
		"name":      "log-router",
		"image":     "amazon/aws-for-fluent-bit:latest",
		"essential": true,
		"firelensConfiguration": stringMap{
			"type": "fluentbit",
			"options": stringMap{
				"enable-ecs-log-metadata": "true",
			},
		},
		//"user": "0",
	}

	jsonBytes, err := json.Marshal([]stringMap{appContainer, datadogAgentContainer, logRoutingContainer})
	if err != nil {
		return "", nil
	}

	return string(jsonBytes), nil
})

appTask, err := ecs.NewTaskDefinition(ctx, utils.CreateResourceName(ctx, "app-task"), &ecs.TaskDefinitionArgs{
	Family:                  pulumi.String("fargate-task-definition"),
	Cpu:                     pulumi.String("256"),
	Memory:                  pulumi.String("512"),
	NetworkMode:             pulumi.String("awsvpc"),
	RequiresCompatibilities: pulumi.StringArray{pulumi.String("FARGATE")},
	ExecutionRoleArn:        args.TaskExecRoleArn.ToStringOutput(),
	ContainerDefinitions:    containerDefs,
	Tags: pulumi.Map{
		"project": pulumi.String("gtp"),
	},
})

@ifokeev

ifokeev commented May 17, 2020

Same problem in a Node.js project here. Investigating...

@ifokeev

ifokeev commented May 17, 2020

So I see the container outputs a different set of env vars, and these don't get merged with the user-supplied environment. This is the state of the container built from the https://github.com/mesudip/nginx-proxy image. It effectively requires me to know the entire env state of the image up front.

[Screenshot attachment: 2020-05-17 at 18:38:19]

            {
                "urn": "urn:pulumi:dev::mlcraft-services::docker:index/container:Container::mlcraft-nginx",
                "custom": true,
                "id": "a1e0ebe86b71768a2b2804ae476a1efec31fd03e68eefec13731b46f1db79182",
                "type": "docker:index/container:Container",
                "inputs": {
                    "__defaults": [
                        "attach",
                        "logDriver",
                        "logs",
                        "mustRun",
                        "name",
                        "readOnly",
                        "rm",
                        "start"
                    ],
                    "attach": false,
                    "capabilities": {
                        "__defaults": []
                    },
                    "envs": [],
                    "image": "sha256:4cee36ea92e54abe51d1c078944b213614d71c5458106779e43fcedda6e6edff",
                    "logDriver": "json-file",
                    "logs": false,
                    "mustRun": true,
                    "name": "mlcraft-nginx-f687c93",
                    "networksAdvanced": {
                        "4dabf18193072939515e22adb298388d": "1b47061264138c4ac30d75fd1eb44270",
                        "ciphertext": "AAABADUzP3tlqo5JZVUZticWiipAckKweG8jLH5L9A11YPdjykF5zzjW4kpTLzmIVuYec1sFXHGj4ETeI5CBwuqauC0VVTHU5dXw8LXbTo+JT+K0BiKORqfTpPuVvHE9VSYZKg=="
                    },
                    "ports": [
                        {
                            "__defaults": [
                                "ip",
                                "protocol"
                            ],
                            "external": 80,
                            "internal": 80,
                            "ip": "0.0.0.0",
                            "protocol": "tcp"
                        },
                        {
                            "__defaults": [
                                "ip",
                                "protocol"
                            ],
                            "external": 443,
                            "internal": 443,
                            "ip": "0.0.0.0",
                            "protocol": "tcp"
                        }
                    ],
                    "privileged": false,
                    "publishAllPorts": true,
                    "readOnly": false,
                    "restart": "on-failure",
                    "rm": false,
                    "start": true,
                    "volumes": [
                        {
                            "__defaults": [],
                            "containerPath": "/var/run/docker.sock",
                            "hostPath": "/var/run/docker.sock"
                        }
                    ]
                },
                "outputs": {
                    "__meta": "{\"schema_version\":\"2\"}",
                    "attach": false,
                    "bridge": "",
                    "capabilities": null,
                    "command": [
                        "sh",
                        "-e",
                        "/docker-entrypoint.sh"
                    ],
                    "cpuSet": "",
                    "cpuShares": 0,
                    "devices": [],
                    "dns": [],
                    "dnsOpts": [],
                    "dnsSearches": [],
                    "domainname": "",
                    "entrypoints": [],
                    "envs": [
                        "CHALLENGE_DIR=/tmp/acme-challenges/",
                        "NGINX_VERSION=1.15.9",
                        "DHPARAM_SIZE=2048",
                        "PYTHON_VERSION=3.6.8",
                        "LETSENCRYPT_API=https://acme-v02.api.letsencrypt.org/directory",
                        "DEFAULT_HOST=true",
                        "CLIENT_MAX_BODY_SIZE=1m",
                        "PYTHON_PIP_VERSION=19.1.1",
                        "PATH=/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                        "LANG=C.UTF-8",
                        "GPG_KEY=0D96DF4D4110E5C43FBFB17F2D347EA6AA65421D"
                    ],
                    "gateway": "172.18.0.1",
                    "groupAdds": [],
                    "healthcheck": {
                        "interval": "10s",
                        "retries": 3,
                        "startPeriod": "10s",
                        "tests": [
                            "CMD-SHELL",
                            "pgrep nginx \u0026\u0026  pgrep python3 \u003e\u003e /dev/null  || exit 1"
                        ],
                        "timeout": "2s"
                    },
                    "hostname": "a1e0ebe86b71",
                    "hosts": [],
                    "id": "a1e0ebe86b71768a2b2804ae476a1efec31fd03e68eefec13731b46f1db79182",
                    "image": "sha256:4cee36ea92e54abe51d1c078944b213614d71c5458106779e43fcedda6e6edff",
                    "ipAddress": "172.18.0.9",
                    "ipPrefixLength": 16,
                    "ipcMode": "private",
                    "labels": [],
                    "links": [],
                    "logDriver": "json-file",
                    "logOpts": {},
                    "logs": false,
                    "maxRetryCount": 0,
                    "memory": 0,
                    "memorySwap": 0,
                    "mounts": [],
                    "mustRun": true,
                    "name": "mlcraft-nginx-f687c93",
                    "networkDatas": [
                        {
                            "gateway": "172.18.0.1",
                            "ipAddress": "172.18.0.9",
                            "ipPrefixLength": 16,
                            "networkName": "mlcraft_dev_network"
                        }
                    ],
                    "networkMode": "default",
                    "networksAdvanced": {
                        "4dabf18193072939515e22adb298388d": "1b47061264138c4ac30d75fd1eb44270",
                        "ciphertext": "AAABALU5ZsMMrT4F2g3vzEuxOWfpbA3XUt8spPCO8tQxGgcub3GughA+nFXUXYJPgYIMlD5AKKR293upgg6ltgjSuff5BXXKDbSBWje0PgxCpdssR/DLFdrMy4Y0owlIoCnwxPOhdNu0O61dIuX8H7ZDqowSvQ=="
                    },
                    "pidMode": "",
                    "ports": [
                        {
                            "external": 80,
                            "internal": 80,
                            "ip": "0.0.0.0",
                            "protocol": "tcp"
                        },
                        {
                            "external": 443,
                            "internal": 443,
                            "ip": "0.0.0.0",
                            "protocol": "tcp"
                        }
                    ],
                    "privileged": false,
                    "publishAllPorts": true,
                    "readOnly": false,
                    "restart": "on-failure",
                    "rm": false,
                    "shmSize": 64,
                    "start": true,
                    "sysctls": {},
                    "tmpfs": {},
                    "ulimits": [],
                    "user": "",
                    "usernsMode": "",
                    "volumes": [
                        {
                            "containerPath": "/var/run/docker.sock",
                            "fromContainer": "",
                            "hostPath": "/var/run/docker.sock",
                            "readOnly": false,
                            "volumeName": ""
                        }
                    ],
                    "workingDir": "/app"
                },
                "parent": "urn:pulumi:dev::mlcraft-services::pulumi:pulumi:Stack::mlcraft-services-dev",
                "dependencies": [
                    "urn:pulumi:dev::mlcraft-services::docker:index/remoteImage:RemoteImage::nginx_image",
                    "urn:pulumi:dev::mlcraft-services::docker:index/network:Network::"
                ],
                "provider": "urn:pulumi:dev::mlcraft-services::pulumi:providers:docker::default_2_1_1::ee5f323d-c560-4e0b-a8ee-0411793ed960",
                "propertyDependencies": {
                    "capabilities": null,
                    "envs": null,
                    "image": [
                        "urn:pulumi:dev::mlcraft-services::docker:index/remoteImage:RemoteImage::nginx_image"
                    ],
                    "networksAdvanced": [
                        "urn:pulumi:dev::mlcraft-services::docker:index/network:Network::"
                    ],
                    "ports": null,
                    "privileged": null,
                    "publishAllPorts": null,
                    "restart": null,
                    "volumes": null
                }
            },

@ifokeev

ifokeev commented May 17, 2020

As I understand it, docker.Container (from pulumi-docker) returns only outputs, per pulumi/pulumi#2659. So how do I merge inputs and outputs so I stop getting these differing states?
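
One workaround (not discussed in this thread, just a hedged sketch on my part) is to tell Pulumi to ignore changes to envs on that resource instead of trying to enumerate every env var the image injects. Roughly, with the Go SDK (import paths depend on your pulumi / pulumi-docker SDK versions; the container name and image tag below are hypothetical):

package main

import (
	"github.com/pulumi/pulumi-docker/sdk/v3/go/docker"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		_, err := docker.NewContainer(ctx, "nginx-proxy", &docker.ContainerArgs{
			Image: pulumi.String("mesudip/nginx-proxy:latest"),
		},
			// Treat the envs the image itself sets (PATH, NGINX_VERSION, ...)
			// as outside Pulumi's control, so they never show up in the diff.
			pulumi.IgnoreChanges([]string{"envs"}))
		return err
	})
}

The trade-off is that genuine changes to envs in your program will also stop showing up in the diff.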

@emiioan

emiioan commented Jan 13, 2021

@mattsleiman-discovery for the Fargate task definition I was able to fix this by providing default health check values. I didn't set the rest, like CPU and memory, since for Fargate they're not mandatory at the container level and they didn't trigger a replacement anyway; the only values that kept replacing containerDefinitions for me were the healthCheck values:

healthCheck: {
  command: ["CMD-SHELL", "curl -f http://localhost/ || exit 1"],
  retries: 3,
  timeout: 5,
  interval: 30
},
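
For the Go setup from the original report, the same workaround would look roughly like this (a sketch that assumes the stringMap helper from the earlier snippet is a map[string]interface{} alias; the command and thresholds just mirror the values above):

// Spell out the health check exactly as ECS reports it back, so the
// marshaled containerDefinitions string matches what is stored in state.
appContainer["healthCheck"] = stringMap{
	"command":  []string{"CMD-SHELL", "curl -f http://localhost/ || exit 1"},
	"retries":  3,
	"timeout":  5,
	"interval": 30,
}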

@lukehoban
Member

This is effectively due to hashicorp/terraform-provider-aws#11526 in the upstream provider. The normalization used in the diff implementation could in principle handle all of the cases raised here:

  1. Normalizing away environment variables set to undefined values
  2. Normalizing user when it is missing or set to the default value
  3. Normalizing away health check default values
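
A rough sketch of that normalization idea (not the provider's actual diff code): before comparing the old and new containerDefinitions strings, fill in server-side defaults and strip empty values so that an omitted field and its default compare equal. The set of defaults below is illustrative, not exhaustive:

package normalize

import "encoding/json"

// normalizeContainerDefs fills in a few ECS-assigned defaults and drops
// empty environment entries, so that a definition with fields omitted and
// one with the defaults spelled out marshal to the same JSON.
func normalizeContainerDefs(doc string) (string, error) {
	var defs []map[string]interface{}
	if err := json.Unmarshal([]byte(doc), &defs); err != nil {
		return "", err
	}
	for _, d := range defs {
		if _, ok := d["cpu"]; !ok {
			d["cpu"] = 0
		}
		if _, ok := d["mountPoints"]; !ok {
			d["mountPoints"] = []interface{}{}
		}
		if _, ok := d["volumesFrom"]; !ok {
			d["volumesFrom"] = []interface{}{}
		}
		// Drop environment entries whose value is empty/undefined.
		if env, ok := d["environment"].([]interface{}); ok {
			kept := make([]interface{}, 0, len(env))
			for _, e := range env {
				if m, ok := e.(map[string]interface{}); ok {
					if v, _ := m["value"].(string); v == "" {
						continue
					}
				}
				kept = append(kept, e)
			}
			d["environment"] = kept
		}
	}
	out, err := json.Marshal(defs) // map keys marshal in sorted order, so output is stable
	return string(out), err
}

// defsEquivalent reports whether two containerDefinitions strings differ
// only by the defaults handled above.
func defsEquivalent(a, b string) bool {
	na, errA := normalizeContainerDefs(a)
	nb, errB := normalizeContainerDefs(b)
	return errA == nil && errB == nil && na == nb
}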

@lukehoban lukehoban added the kind/bug Some behavior is incorrect or out of spec label Nov 19, 2022
@t0yv0 t0yv0 added the bug/diff kind/bug related to Pulumi generating wrong diffs on preview or up. label Jul 3, 2023
@t0yv0 t0yv0 added the awaiting-upstream The issue cannot be resolved without action in another repository (may be owned by Pulumi). label Jan 10, 2024
@t0yv0 t0yv0 added the service/ecs ECS related things label Mar 28, 2024
@t0yv0 t0yv0 mentioned this issue Apr 26, 2024
@t0yv0 t0yv0 closed this as completed in 896ae14 Jun 28, 2024
@pulumi-bot pulumi-bot added the resolution/fixed This issue was fixed label Jun 28, 2024
@mjeffryes mjeffryes added this to the 0.107 milestone Jul 24, 2024