Optimise Scenario #1434
Comments
This indeed seems like an obvious win. Presumably we can just pass in
Both of these are the positional/keyword argument processing. I wonder if we can find a tidy way to have newer Python use the built-in support and only fall back to this on old Python.
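A minimal sketch of that kind of version gate, assuming the "built-in support" in question is the `kw_only` flag that `dataclasses` gained in Python 3.10 (the class and fields below are illustrative, not Scenario's real `State` definition):

```python
import dataclasses
import sys

# On 3.10+ let dataclasses enforce keyword-only construction natively;
# older interpreters keep a hand-rolled check instead.
_NATIVE_KW_ONLY = sys.version_info >= (3, 10)
_dc_kwargs = {"frozen": True}
if _NATIVE_KW_ONLY:
    _dc_kwargs["kw_only"] = True


@dataclasses.dataclass(**_dc_kwargs)
class State:
    # Illustrative fields only.
    leader: bool = False
    config: dict = dataclasses.field(default_factory=dict)

    if not _NATIVE_KW_ONLY:
        def __new__(cls, *args, **kwargs):
            # Slow-path sketch: reject positional arguments by hand.
            if args:
                raise TypeError(f"{cls.__name__} takes keyword arguments only")
            return super().__new__(cls)
```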
I wonder if this is also the same issue: lots of log lines and each one involves creating a
Yeah, this is probably the single biggest win. I made this change locally when running the tests. Happy for us to spend a few hours of rainy-day time looking at the other bottlenecks too. (Let's just be careful not to get too carried away on the long tail...)
Alex Batisse mentioned that Scenario tests take much longer and that the culprit is YAML?
I checked, and I do have libyaml bundled with/used by pyyaml. When I profiled my tests, I noticed that they were spending a long time serializing YAML. Mind you, this is but a quick and dirty experiment, but here is what I tried:
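A minimal sketch of that kind of experiment, assuming the win comes from explicitly selecting the libyaml-backed dumper/loader (PyYAML's `safe_dump`/`safe_load` use the pure-Python implementations even when libyaml is installed); the helper names below are made up for illustration:

```python
import yaml

# Prefer the libyaml-backed classes when they are available; fall back to the
# pure-Python ones otherwise.
try:
    from yaml import CSafeDumper as _Dumper, CSafeLoader as _Loader
except ImportError:
    from yaml import SafeDumper as _Dumper, SafeLoader as _Loader


def fast_safe_dump(data) -> str:
    """Equivalent of yaml.safe_dump, routed through libyaml when present."""
    return yaml.dump(data, Dumper=_Dumper)


def fast_safe_load(text: str):
    """Equivalent of yaml.safe_load, routed through libyaml when present."""
    return yaml.load(text, Loader=_Loader)
```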
This yielded a ~15% speed increase for my test file.
Some initial times (best of 3), using traefik-k8s (branch
@Batalex what repo (and branch, if appropriate) are you testing against? I'll use it as a second benchmark so that we're checking against multiple types of tests.
I think I was using the latest release (7.0.5) because I directly edited the files in the virtual env between two sessions during the sprint. I don't have this venv around any more, but the project itself has |
Hey @Batalex, sorry for being unclear. I meant: what test suite are you running (one of the data platform repos, I assume)? That way I can run against it too with the changes, to make sure that they do improve things.
Ah, sorry, I was quite slow-witted yesterday. I was testing on https://github.com/canonical/kafka-k8s-operator (branch
The unit testing suite for traefik is probably the largest scenario test battery around.
Today I ran it and I realized it took over a minute and a half to complete, so I decided to profile it.
This is the result:
To produce the profile, run with this tox env:
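A minimal sketch of such a tox env, assuming the profile is produced by running pytest under cProfile; the env name, paths, and output file are assumptions, not the actual configuration:

```ini
[testenv:profile]
description = run the unit tests under cProfile
deps =
    pytest
    -r {toxinidir}/requirements.txt
commands =
    python -m cProfile -o {toxinidir}/profile.pstats -m pytest {toxinidir}/tests/unit
```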
There are some obvious candidates for optimization:

- `State.__new__`. Can we do something about that?
- Why does `juju-log` take so long?

Profiling Scenario's own test suite yields a very similar picture:
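A rough sketch of pulling those hot spots out of a saved profile with `pstats` (the `profile.pstats` file name matches the assumed output above; the filter string is just one suspected hot spot):

```python
import pstats

# Load the saved profile and sort by cumulative time so the expensive call
# trees (State construction, juju-log handling, YAML work) float to the top.
stats = pstats.Stats("profile.pstats")
stats.sort_stats("cumulative")
stats.print_stats(30)

# Narrow the output to a suspected hot spot by regex.
stats.print_stats("__new__")
```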