
dockercompose: fix dc_resource new_name parameter causing depends_on problems (#6325) #6327

Merged: 2 commits into tilt-dev:master on Mar 15, 2024

Conversation

@kinland (Contributor) commented on Mar 5, 2024:

This solves the issue I reported in #6325 by modifying the loop in the renameDCService function to also patch s.dc[projectName].services[n].ServiceConfig.DependsOn[name] and s.dc[projectName].services[n].Options.resourceDeps[name].

I also introduced some temporary local variables, because the repeated s.dc[projectName] and s.dc[projectName].services were hurting readability, especially after I added the changes to the loop.
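
For context, a minimal self-contained sketch of the shape of this change; the struct definitions below are simplified stand-ins for the real Tilt types (assumed, not copied from the codebase):

```go
package main

import (
	"fmt"
	"slices"
)

// Simplified stand-ins for the Tilt types referenced above (assumed shapes).
type serviceConfig struct {
	DependsOn map[string]struct{}
}

type serviceOptions struct {
	resourceDeps []string
}

type dcService struct {
	ServiceConfig serviceConfig
	Options       serviceOptions
}

// renameService patches every service that depends on `name` so that it now
// depends on `newName`, in both the parsed Compose config (DependsOn) and
// Tilt's resource-dependency list (resourceDeps).
func renameService(services map[string]*dcService, name, newName string) {
	for _, svc := range services {
		if dep, ok := svc.ServiceConfig.DependsOn[name]; ok {
			svc.ServiceConfig.DependsOn[newName] = dep
			delete(svc.ServiceConfig.DependsOn, name)
		}
		if i := slices.Index(svc.Options.resourceDeps, name); i != -1 {
			// Replace in place so the ordering of the other deps is preserved.
			svc.Options.resourceDeps[i] = newName
		}
	}
}

func main() {
	services := map[string]*dcService{
		"bar": {
			ServiceConfig: serviceConfig{DependsOn: map[string]struct{}{"db": {}, "foo": {}}},
			Options:       serviceOptions{resourceDeps: []string{"db", "foo"}},
		},
	}
	renameService(services, "db", "db2")
	fmt.Println(services["bar"].Options.resourceDeps) // [db2 foo]
}
```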

@nicks (Member) left a comment:

thanks! can you add a test for the new behavior?

Review comment on this hunk:

```go
	index = i
	break
} else if sd, ok := services[n].ServiceConfig.DependsOn[name]; ok {
	services[n].ServiceConfig.DependsOn[newName] = sd
```
@nicks (Member) commented:

out of curiosity, does it really change anything to rewrite the Docker Compose config?

@kinland (Contributor, Author) replied:

I'm not sure which of the two (or whether both) caused the problem I was having. IIRC, only one seemed to matter, but I did both out of an abundance of caution.

Review comment on this hunk:

```go
	break
} else if sd, ok := services[n].ServiceConfig.DependsOn[name]; ok {
	services[n].ServiceConfig.DependsOn[newName] = sd
	services[n].Options.resourceDeps = []string{newName}
```
@nicks (Member) commented:

hmmm...i don't think this is right if resourceDeps has other elements?

@kinland (Contributor, Author) replied:

Oh, good call. I'll fix that when I add the tests (probably in a day or two)

@kinland (Contributor, Author) commented on Mar 13, 2024:

Is resource-dependency ordering nondeterministic somehow? I changed this to:

Suggested change:

```diff
-services[n].Options.resourceDeps = []string{newName}
+if rdIndex := slices.Index(services[n].Options.resourceDeps, name); rdIndex != -1 {
+	services[n].Options.resourceDeps[rdIndex] = newName
+}
```

I have created several test-case permutations; sometimes they all pass, sometimes one or two fail, and even which one fails is inconsistent. I'm not sure why it would be flaky.

In my test, allNames is a map[string]string from the old names to the new names, including any names that weren't changed, i.e. allNames["bar"] might be either bar or bar2 depending on the test case.

If a test failure occurs, it happens on this line:

```go
f.assertNextManifest(model.ManifestName(allNames["bar"]), resourceDeps(allNames["db"], allNames["foo"]))
```

with a failure like:

```
Not equal:
    expected: []model.ManifestName{"db", "foo"}
    actual  : []model.ManifestName{"foo", "db"}
```

Since I'm replacing at the same index, I don't understand what might be causing the order to change. In fact, this sometimes happens in a test where the service itself is renamed, even when the dependencies aren't renamed.

My best guess is that this is due to code somewhere iterating over a map, and "the iteration order over maps is not specified and is not guaranteed to be the same from one iteration to the next." I tried replacing such a loop in my test-generation code with iteration over a sorted list of keys, but that didn't change the behavior. So I'm guessing that when the Docker Compose file gets read in, the data is stored in a map before it is put into a slice, before it even reaches my changes; but there's a lot of code in the Tilt codebase, and it's hard to track something like that down.
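
As a standalone illustration of that hypothesis (not code from Tilt): building a slice by ranging over a map yields an unspecified order, while sorting at the point the slice is built makes it stable.

```go
package main

import (
	"fmt"
	"slices"
)

func main() {
	deps := map[string]bool{"db": true, "foo": true}

	// Ranging over a map: the order is unspecified and may differ between
	// runs, e.g. [db foo] on one run and [foo db] on another.
	var unstable []string
	for name := range deps {
		unstable = append(unstable, name)
	}
	fmt.Println(unstable)

	// Sorting where the slice is built makes the result deterministic.
	stable := make([]string, 0, len(deps))
	for name := range deps {
		stable = append(stable, name)
	}
	slices.Sort(stable)
	fmt.Println(stable) // always [db foo]
}
```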

If this is the case, it's easy enough to just modify the assert.Equal(f.t, opt.deps, m.ResourceDependencies) in tiltfile_test.go to sort before comparing, but I don't want to do this if the problem is something I'm overlooking in my changes.
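
A minimal sketch of what that order-insensitive comparison could look like; ManifestName here is a local stand-in for model.ManifestName, and testify's assert.ElementsMatch is an alternative that skips the explicit sorting:

```go
package example

import (
	"slices"
	"testing"

	"github.com/stretchr/testify/assert"
)

// ManifestName is a local stand-in for model.ManifestName.
type ManifestName string

// assertSameDeps compares two dependency lists while ignoring order,
// i.e. the "sort before comparing" idea described above.
func assertSameDeps(t *testing.T, expected, actual []ManifestName) {
	e := slices.Clone(expected)
	a := slices.Clone(actual)
	slices.Sort(e)
	slices.Sort(a)
	assert.Equal(t, e, a)

	// Equivalent, without the explicit sort:
	// assert.ElementsMatch(t, expected, actual)
}
```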

@nicks (Member) replied:

can you push the version of the code that's failing?

@kinland (Contributor, Author) replied:

Looks like it reproed in your CircleCI the first time, without needing a re-run.

@nicks (Member) replied:

can you try rebasing on #6335?

@kinland (Contributor, Author) replied:

I cherry-picked that commit & ran my tests 5 times in a row w/o clearing the cache, and they all passed!

If you don't mind a force-push, I can push a rebase later when I have a bit more time, but I'm confident that solved the test flakiness, since I never managed two clean runs in a row before.

@kinland (Contributor, Author) commented:

I went ahead and did that, since it seemed that was probably what you wanted.

@kinland (Contributor, Author) commented:

Looks like the test that failed is probably just a flaky test - it's k8s-related, and I didn't touch anything that wasn't Docker-specific.

kinland added 2 commits on March 14, 2024 at 17:47:

- …problems (tilt-dev#6325)
  Signed-off-by: Kinland <16787581+kinland@users.noreply.github.com>
- … commit
  Signed-off-by: Kinland <16787581+kinland@users.noreply.github.com>
@nicks self-requested a review on March 15, 2024 at 13:14.
@nicks merged commit 0cd892c into tilt-dev:master on Mar 15, 2024.
9 checks passed.