Better distribution of work with execution segments? #1308
@na-- What do we do in the ... Do we say that one of the instances with 1 preallocated VU cannot spin up more VUs, but the other can?
Not sure I fully understand your question... You're asking what happens if we have a ...
This actually somewhat ties in with the issues we're investigating in #1296 (comment) - I sincerely hope that reverting to the old algorithm will solve the issue, otherwise we might have a problem with this as well...
I find this separation to be the most ... useless one, unless you also think that instance 2 should then also do iterations. And what you are basically saying is that the separation of the iterations/s between instances should be done based on ...
And, for example, with ...
instead of ..., if we decide that we separate preAllocatedVUs based on the segments, and then, based on those, we separate the rest:
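A minimal sketch of that two-step idea (hypothetical Python, not k6's actual Go code; `split_count` and the thirds are my assumptions): split the integer preAllocatedVUs count across the segment boundaries by flooring the cumulative fraction at each boundary, so the parts always sum exactly to the total.

```python
from fractions import Fraction

def split_count(total, boundaries):
    """Partition an integer `total` across adjacent segments whose cumulative
    boundaries are the sorted fractions [0, ..., 1]. Flooring the cumulative
    share at each boundary guarantees the parts sum exactly to `total`."""
    return [int(hi * total) - int(lo * total)
            for lo, hi in zip(boundaries, boundaries[1:])]

# Splitting 2 preallocated VUs into thirds: the first instance gets 0 VUs.
thirds = [Fraction(0), Fraction(1, 3), Fraction(2, 3), Fraction(1)]
print(split_count(2, thirds))  # [0, 1, 1]
```

Because consecutive floors telescope, no VU is ever lost or duplicated, whatever the segment boundaries are.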
Hmm yeah, that seems to be the fairest approach. Consider the case where you have ...
That seems like a corner case where you basically say that all VUs will be allocated during the execution, and that is what we expect to happen... While I don't think that that is what ... To be honest, your approach is easier to implement (or at least I have problems implementing mine from the current code). But this really shouldn't matter in this case.

On a related matter (constant-arrival-rate), still working with the above example and saying that we separate based on maxVUs: do we want to have the same fix as with variable-arrival-rate, where, when we separate it between instances, we don't do iterations at the same time:
or:
Because ... If this is true, I would like to merge the previous two PRs now and then work from them, as otherwise we will end up with one HUGE PR in the end.
This is not fixed yet:

```js
export let options = {
    execution: {
        constant_arr_rate: {
            type: "constant-arrival-rate",
            rate: 2,
            duration: "10s",
            preAllocatedVUs: 2,
            maxVUs: 2,
        },
    },
};
```

And you run the script in 3 instances with ...
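To illustrate the mismatch under discussion (a hypothetical Python sketch, not k6 code), assume the 3 instances split the execution into plain thirds: the rate divides evenly because it is infinitely divisible, but the integer maxVUs count does not, so one instance ends up with work and no VUs.

```python
from fractions import Fraction

rate, max_vus = 2, 2  # from the config above
thirds = [Fraction(0), Fraction(1, 3), Fraction(2, 3), Fraction(1)]

result = []
for lo, hi in zip(thirds, thirds[1:]):
    seg_rate = rate * (hi - lo)                      # 2/3 iter/s everywhere
    seg_vus = int(hi * max_vus) - int(lo * max_vus)  # floor-based integer split
    result.append((seg_rate, seg_vus))

print(result)  # first instance: 2/3 iter/s of work, but 0 VUs to run it with
```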
Never mind, I'm an idiot, I tested this without a segment sequence...
Unfortunately, because of the way the ExecutorConfig interface is designed, I can't just cache some of the results, which will result in some probably unneeded amount of calculation. This will need either a change to the interface, or for ExecutionTuple to cache the results of GetNewExecutionTupleBasedOnValue, but I find it unlikely that this will be noticeable outside of extreme examples. Closes #1308
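The caching idea could look something like this sketch (hypothetical Python mirroring the names from the comment; the real ExecutorConfig/ExecutionTuple code is Go and shaped differently): memoize the derived result per input value, so repeated calls with the same value don't recompute it.

```python
class ExecutionTuple:
    """Toy stand-in: caches the result of an expensive per-value derivation,
    as suggested for GetNewExecutionTupleBasedOnValue. Purely illustrative."""

    def __init__(self, segments):
        self.segments = segments
        self._cache = {}
        self.computations = 0  # counts how often the expensive path runs

    def get_new_execution_tuple_based_on_value(self, value):
        if value not in self._cache:
            self.computations += 1  # the expensive recomputation, done once per value
            self._cache[value] = tuple(s * value for s in self.segments)
        return self._cache[value]

et = ExecutionTuple([0.5, 0.25, 0.25])
et.get_new_execution_tuple_based_on_value(100)
et.get_new_execution_tuple_based_on_value(100)  # served from the cache
print(et.computations)  # 1
```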
The discussions surrounding #1295 and #1307 made me realize that we might have an issue with work partitioning... Some of the executors have 2 different attributes that have to be distributed when we partition them with execution segments (#997): the number of VUs they have at any given moment, and the work they have to do.

The `per-vu-iterations`, `constant-looping-vus`, and `variable-looping-vus` executors don't have this problem, since for them, the only important thing we need to segment/partition is the number of VUs - the work each VU has to do isn't affected, it's constant. But for `shared-iterations`, `constant-arrival-rate`, and `variable-arrival-rate`, things are a bit different. Say that we have the following configuration: ... This is the execution segment that has no VUs, but a single iteration... Similarly, with the `arrival-rate` executors, it will probably be even easier to end up having a segment with work but no VUs... So, I think we may need to have a 2-tier partitioning for these executors: ...

So, in the above example, this is how the iterations and VUs are currently partitioned if we split the work into thirds, and also how I think they should be:
For the arrival-rate executors, things like this would probably happen even more often, since iters/s are practically infinitely divisible. Say that we have 2 iters/s with 2 VUs, and we again want to split that execution into thirds:
We'd have something like this:
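For the 2 iters/s with 2 VUs example, here is a hedged sketch of the "rate follows the VUs" alternative (hypothetical Python, not k6 code): instead of giving every third of the execution 2/3 iter/s, redistribute the rate in proportion to the VUs each instance received, assuming a [0, 1, 1] VU split.

```python
from fractions import Fraction

rate = Fraction(2)    # 2 iters/s from the example
vu_parts = [0, 1, 1]  # 2 VUs split into thirds via the floor-based split
total_vus = sum(vu_parts)

# Rate proportional to VUs: the VU-less instance gets no work at all.
seg_rates = [rate * Fraction(v, total_vus) for v in vu_parts]
print(seg_rates)  # [Fraction(0, 1), Fraction(1, 1), Fraction(1, 1)]
```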
These changes should be fairly easy and efficient to achieve with the execution segments, though of course, serious testing would be required to validate it...