Parallelizing pytest #206
Conversation
self.bayesian_optimizer_factory = BayesianOptimizerFactory(grpc_channel=self.optimizer_service_grpc_channel, logger=self.logger)
max_num_tries = 100
num_tries = 0
for port in range(50051, 50051 + max_num_tries):
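The loop above scans a range of ports starting at the gRPC default (50051) so that each parallel test worker can start its own optimizer service. A minimal sketch of that idea, using only the standard library to probe for a free port (`find_free_port` is a hypothetical helper, not part of the MLOS codebase):

```python
import socket

def find_free_port(start: int = 50051, max_num_tries: int = 100) -> int:
    """Return the first port in [start, start + max_num_tries) that we can bind.

    Mirrors the port-scanning loop in the snippet above: try to bind each
    candidate port; a successful bind means the port is currently free.
    """
    for port in range(start, start + max_num_tries):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            try:
                sock.bind(("localhost", port))
                return port  # bind succeeded, port is free
            except OSError:
                continue  # port in use, try the next one
    raise RuntimeError(f"No free port found in range {start}-{start + max_num_tries - 1}")
```

Note there is an inherent race here: another worker can grab the port between the probe and the actual server start, which is one reason starting the server inside the loop (as the PR does) is more robust than probing first.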
Hmm, questions ...
Is this only necessary when running tests in parallel?
Or is it not cleaning up after itself? Or maybe some delay after the teardown is necessary?
Can a single remote optimizer not handle requests from multiple clients?
It's so that multiple services can run in parallel. Having one handle multiple requests would work too.
*one service - each test instantiates its own optimizer.
Thanks, makes sense.
The teardown changes (specifically closing the client channel explicitly) may still be necessary in the other one. I think I only made the change in the one that was throwing errors at the time.
If I recall correctly, it leaves a socket open that causes conflicts later on when the test goes to reuse the port.
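The leftover-socket conflict described above can be demonstrated with plain standard-library sockets, independent of gRPC (a minimal sketch; the helper name is hypothetical):

```python
import socket

def demo_socket_conflict() -> bool:
    """Show that an unclosed socket blocks rebinding its port until teardown.

    Returns True if the second bind attempt conflicted while the first
    socket was still open.
    """
    # First "service" binds a port; the OS picks a free one.
    s1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s1.bind(("localhost", 0))
    port = s1.getsockname()[1]

    # A second bind to the same port fails while the first socket is open,
    # which is the kind of conflict a test sees when teardown skips close().
    s2 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    conflict = False
    try:
        s2.bind(("localhost", port))
    except OSError:
        conflict = True
    finally:
        s2.close()

    # Explicit teardown releases the port...
    s1.close()

    # ...after which rebinding succeeds again.
    s3 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s3.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s3.bind(("localhost", port))
    s3.close()
    return conflict
```

This is also why closing the client channel explicitly in the test teardown (e.g. `channel.close()` on a gRPC channel) matters: relying on garbage collection can leave the socket open long enough for the next test to collide with it.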
I thought it would only happen on the server side. Is there any repro that would tell us when we've fixed the bug?
Hmm, the command to execute them got dropped in the commit message (`-k test_echo` is optional; it's there just to keep the repro as simple as possible, and it works mostly because I added a second `test_echo2`). The bug only shows up on the second test run, since there needs to be cruft left over from the first:
# pytest -svxl -k test_echo source/Mlos.Python/mlos/unit_tests/TestBayesianOptimizerGrpcClient.py
awesome, thanks
Couple minor suggestions.
source/Mlos.Python/mlos/Optimizers/unit_tests/TestBayesianOptimizer.py
…mizer.py Co-authored-by: Brian Kroth <bpkroth@users.noreply.github.com>
Didn't look into the issue @bpkroth pointed out in detail, but otherwise looks good.
…com/byte-sculptor/MLOS into 2020/december/parallelizing_pytests
This PR:
- Adds the `-n auto` argument to make multiple tests run in parallel (number of parallel jobs = number of cores)
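For reference, `-n auto` comes from the pytest-xdist plugin, which spawns one worker process per CPU core. A typical invocation (the test path is illustrative; adjust to the suite being run):

```shell
# pytest-xdist must be installed for -n to be recognized
pip install pytest-xdist

# Run the suite with one worker per core
pytest -n auto source/Mlos.Python/mlos/unit_tests/
```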