bug: allow start container with reuse in different test package #2247
Conversation
@mdelapenya Hi! Please take a look, I've encountered the same issue and refreshed the code from #1572.
@mdelapenya Are there some flaky tests? The checks just passed in my fork: https://github.com/Alviner/testcontainers-go/actions/runs/7993042944/
LGTM, thanks!
Just to let you know, we are thinking about revamping how the reusable containers work, in a more consistent and deterministic manner. So this code could change or be removed. But until that, this is perfectly fine, thanks for your time contributing it!
BTW I'm renaming the PR to bug, as this is fixing the existing behaviour.
```go
// default hooks include logger hook and pre-create hook
defaultHooks := []ContainerLifecycleHooks{
	DefaultLoggingHook(p.Logger),
	defaultReadinessHook(),
	// ...
}
```
Question @Alviner / @mdelapenya: is there any good reason why `defaultReadinessHook()` should also be added for a container being reused?
The reason I'm asking is that I noticed quite some overhead when upgrading testcontainers from `v0.28.0` to `v0.29.1`, and I was able to 'bisect' the issue to this PR. As we rely heavily on reused containers, the overhead is quite noticeable:
```console
$ time make integration-test
...
real 0m4.555s
user 0m10.373s
sys 0m1.093s
```
before, as opposed to now:
```console
real 0m9.177s
user 0m8.254s
sys 0m1.094s
```
When the container is initially created (and thanks to the mutex), it has already been fully started and its readiness checked, so from my perspective I don't see a reason to keep checking it.
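To make the question concrete: one could imagine gating the readiness hook on reuse, so it only runs for freshly created containers. This is a hypothetical sketch, assuming a `reuse` flag threaded through from the request; it is not how testcontainers-go is actually implemented:
```go
// Hypothetical: attach the readiness hook only when the container is
// freshly created. The "reuse" flag is assumed for illustration; this
// is NOT the actual testcontainers-go code.
defaultHooks := []ContainerLifecycleHooks{
	DefaultLoggingHook(p.Logger),
}
if !reuse {
	defaultHooks = append(defaultHooks, defaultReadinessHook())
}
```
As the reply below explains, this would only be safe within a single test process.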
When you run tests from different packages, they run in separate processes with a single common parent. Therefore, if you reuse containers across these packages, each process needs to wait on the readiness hook, which effectively triggers `waitFor` in most situations.
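For context, here is a minimal sketch of the reuse pattern under discussion, assuming the `Reuse` flag and a fixed `Name` on `GenericContainerRequest` (API as of v0.29.x). Each test binary runs this independently, so each one has to confirm readiness on its own:
```go
package mypkg_test

import (
	"context"
	"testing"

	"github.com/testcontainers/testcontainers-go"
	"github.com/testcontainers/testcontainers-go/wait"
)

func TestWithSharedRedis(t *testing.T) {
	ctx := context.Background()

	// Every test package requests the same container name with Reuse: true.
	// Whichever binary gets there first creates the container; the others
	// attach to it, and each must wait for readiness themselves.
	redisC, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
		ContainerRequest: testcontainers.ContainerRequest{
			Image:        "redis:7",
			Name:         "shared-redis",
			ExposedPorts: []string{"6379/tcp"},
			WaitingFor:   wait.ForListeningPort("6379/tcp"),
		},
		Started: true,
		Reuse:   true,
	})
	if err != nil {
		t.Fatal(err)
	}
	_ = redisC // use the container in the actual tests
}
```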
Right, ok. Then the mutex of course no longer guarantees readiness of the container.
But I do think a solution for another bug I found (#2445) might resolve this to some extent: if there's a way to (explicitly) scope a reusable container to the SessionID, then it's guaranteed to always be the same mutex instance. I'm waiting for some feedback there before opening a PR :-)
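A rough sketch of that idea: key the creation lock on the session ID plus the container name, so every reuse request in the same session contends on the same mutex. All names here are hypothetical, not testcontainers-go code; note that such a registry is still process-local, so the cross-binary case above would still need the daemon-side handling:
```go
package reuse

import "sync"

// Hypothetical process-local registry of creation locks, keyed by
// session ID and container name.
var (
	locksMu sync.Mutex
	locks   = map[string]*sync.Mutex{}
)

// reuseLock returns the one mutex guarding a given reusable container
// within this session, creating it on first use.
func reuseLock(sessionID, containerName string) *sync.Mutex {
	locksMu.Lock()
	defer locksMu.Unlock()

	key := sessionID + "/" + containerName
	if locks[key] == nil {
		locks[key] = &sync.Mutex{}
	}
	return locks[key]
}
```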
Thank you all for raising this topic, and for being so proactive in the discussion 🙇
We are working on a better reuse experience for testcontainers-go. Reuse was initially built as an experimental feature that was adopted as soon as it landed, so it is now part of the public API 😅 and difficult to remove without causing trouble for users.
The new experience will be more robust, and it will be provided through Testcontainers Desktop.
Thanks for the additional insight.
But does that mean no changes are expected, or even approved, for the library itself? I don't think the feature needs to be removed, but there must be a way to fix the existing behaviour without breaking others...
I certainly hope it won't require the Testcontainers Desktop app, since that would not work in our CI pipeline. And currently the "reuse" flag is causing actual issues: #2445 (comment)
What does this PR do?
When creating a reusable container from different test binaries in parallel, you can get a "name conflict" error from the Docker daemon, because the separate binaries do not share a mutex.
Instead of exiting with an error when receiving the "name conflict", we now try to wait for the container creation to complete using the `ContainerList` method, and call the ready/started hooks afterwards (see the sketch below).
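The fallback can be sketched against the Docker SDK directly. The helper name and retry loop below are illustrative rather than the PR's exact code, and the list-options type has moved between SDK versions (`container.ListOptions` in recent releases):
```go
package reuse

import (
	"context"
	"time"

	"github.com/docker/docker/api/types/container"
	"github.com/docker/docker/api/types/filters"
	"github.com/docker/docker/client"
)

// waitForExistingContainer polls ContainerList until the container that won
// the naming race appears, instead of failing on the daemon's name conflict.
// The caller would then run the ready/started hooks against the returned ID.
func waitForExistingContainer(ctx context.Context, cli *client.Client, name string) (string, error) {
	for {
		list, err := cli.ContainerList(ctx, container.ListOptions{
			All:     true,
			Filters: filters.NewArgs(filters.Arg("name", name)),
		})
		if err != nil {
			return "", err
		}
		if len(list) > 0 {
			return list[0].ID, nil
		}
		select {
		case <-ctx.Done():
			return "", ctx.Err()
		case <-time.After(100 * time.Millisecond):
		}
	}
}
```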
Why is it important?
This PR allows a reusable container to be shared by the parallel test binaries spawned by `go test ./...`.
Related issues