Enhancement to arrival rate executors #1386
Not sure I like
I don't think we need this right now, since
As mentioned in #1285 (comment), the initialization of unplanned VUs probably shouldn't happen in the

On the other hand, we probably also don't want to initialize more than one unplanned VU at a time. So this feels like it should be a small "side-car" helper to these executors. It will probably also be easier to do with the "Fewer goroutine starts" implementation from #1386 (comment).

Edit: a somewhat connected issue: #1407
I started refactoring both arrival-rate executors to use common execution code, but quickly hit various issues and decided to just copy-paste the fixes for #1386 (comment) (i.e. the newer commits in #1500) for now... 😞 We should still do it before we implement the features proposed above, but yeah, I decided to leave it for 0.28.0 as well...

Another thing we should do when refactoring this to have a common run framework is to mock time (#1357 (comment)). This would allow us to finally test everything in a non-flaky way... And given that we'd need to plug different timers from the two executors, it should be pretty natural to do it in a way that's testable. 🤞
This was an initial attempt to solve the first point from #1514, but there were a few blockers:

- #1499
- #1427
- #1386 (comment)
8c25045: here are my attempts at refactoring the ramping arrival rate into something more... split up, with the idea to use it in the constant arrival rate as well.
Sparked by this comment in the community forum, I did some profiling with the following scripts:

```js
exports.options = {
    scenarios: {
        test1: {
            executor: 'ramping-arrival-rate',
            preAllocatedVUs: 100,
            stages: [
                { target: 100000, duration: '0' },
                { target: 100000, duration: '6000s' },
            ],
        },
    },
};

exports.default = function () {};
```

and

```js
exports.options = {
    scenarios: {
        test1: {
            executor: 'constant-arrival-rate',
            preAllocatedVUs: 1000,
            rate: 100000,
            duration: '6000s',
        },
    },
};

exports.default = function () {};
```

Both are doing empty iterations with

There is still more investigation needed to figure out why having 100 VUs is so much better than having 1k, but
I created a new issue for tracking the performance optimization: #1944. So this issue will just be left for discussing the time-bucketing and other improvements, like refactoring the long and complicated
All of the below might also be relevant to other executors not mentioned here.
Fewer goroutine starts
As suggested in this comment, instead of constantly starting a new goroutine for each iteration, it might be better to keep each VU in a "permanent" goroutine and send it iterations to start.
This will probably be more performant, but it has the downside that, unlike the current approach, it doesn't guarantee that the least-used VU will do the next iteration... which might be something we care about a lot.
I propose we try it out and benchmark it... if it really is a lot faster and has some amount of balancing, I am for using it.
Bucketing iterations
While discussing how accurate the proposed start of iterations should be, we came to the idea that having a way to be... "less" accurate is a good idea. People have often asked if they can start a number of iterations at the same time, and it is also how some people will probably understand the arrival-rate executors: you start 10 iterations at the beginning of each second, instead of starting 10 iterations over 1 second. The proposal is to add yet another argument specifying the buckets in which we should start iterations (I guess we could also use it for variable looping VUs, to start VUs in buckets :)).
So:
Would mean that for the first 5 seconds, instead of starting one iteration every 1s/50 = 20ms, we will start 5 iterations every 100ms (the time bucket).
Then, instead of ramping up linearly from 50 to 100 (which would usually mean starting iterations every 20ms to 10ms, depending on how close we are to the end of the ramp-up), we will start all of the iterations for each 500ms window at the same time, every 500ms.
Finally, instead of doing 1 iteration every 10ms (1s/100), we will start 10 iterations every 100ms (the `timeBucket`). I wonder if this wouldn't be easier with defining a number of iterations instead of a time... maybe both could be supported.
Maybe we should add another executor for this instead, as this will probably worsen the usual performance? Although it is important to note that the original proposal was to do this by default with a 1ms bucket, so we don't need to sleep so much... and it would possibly be nearly as accurate... so all of this needs to be tested :D