Execution segments for partitioning work between k6 instances #997
I've mostly implemented this, and it seems that it's working as intended for now, pending more tests... 😄 I actually thought of a more accurate way to split work between different instances with execution segments that has the same nice properties as the one described above, and I've documented it at the bottom of this post. But more importantly, I've realized that there is one tricky caveat we have to be wary of when partitioning tests between multiple instances, so I'll start with that:

Caveat when splitting work between instances

Let's say that a certain test with multiple different schedulers has the following configuration:

import { sleep } from "k6";
export let options = {
execution: {
per_vu_iters: {
type: "per-vu-iterations",
vus: 5,
iterations: 5,
maxDuration: "5s",
gracefulStop: "0s",
},
shared_iters: {
type: "shared-iterations",
vus: 5,
iterations: 25,
maxDuration: "5s",
gracefulStop: "0s",
},
constantrate: {
type: "constant-arrival-rate",
rate: 2,
timeUnit: "1s",
duration: "5s",
preAllocatedVUs: 10,
maxVUs: 10,
gracefulStop: "0s",
startTime: "5s",
},
},
};
export default function () {
sleep(0.5 + Math.random() * 0.5);
}

With a normal 100% execution, this test will run with 10 MaxVUs for 10 seconds in total (notice the

The reason for this is pretty simple and intuitive once you've grokked things. Basically, it's the result of the combination of the following 3 statements: 1) different schedulers can run simultaneously, 2) VUs are indivisible entities, 3) VUs are reused between schedulers when it's safe to do so, i.e. when their execution times are not overlapping. In the example above,
As you can see, both

This isn't a huge deal, I'm just mentioning it as something to keep in mind when we're splitting execution between multiple k6 instances. Once we generate an execution plan (i.e. how many max VUs would be needed at any given time in the test execution), we can't correctly scale those aggregated values. We could scale them if we only wanted a rough estimate, but for precise MaxVU calculations, we should generate an execution plan only after we've applied the execution segment value.

More accurate algorithm

As mentioned in the beginning, I think I have a better algorithm for calculating values in different segments. It's pretty similar to the one described in the first message and has the same nice properties, but it should be slightly more accurate and slightly simpler. The original algorithm for scaling indivisible integer values was basically
See how suddenly the 20% segment has 3 VUs, while the 25% segment has only 2. It's not the end of the world - these errors won't ever get bigger than

But they're also unnecessary 😄! Instead of using
Here are the same previous examples, but with the new formula:
The same nice properties as the old algorithm, but even better! Each segment's end value is still consistently and correctly calculated based only on the
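As an illustration of the properties being claimed here - each segment's value is computed only from its own (from, to] boundaries, and the per-segment values always add up to the original total - here is a minimal Go sketch. It assumes a floor-based scaling formula, floor(value*to) - floor(value*from), which is equivalent to the floor((value*from) % 1 + value*(to - from)) expression from the original proposal; treat it as an illustration, not the exact formula discussed above:

package main

import (
	"fmt"
	"math/big"
)

// scale computes the share of an indivisible integer value (e.g. VUs) for a
// (from, to] execution segment, using only that segment's own boundaries:
// floor(value*to) - floor(value*from).
func scale(value int64, from, to *big.Rat) int64 {
	floorMul := func(r *big.Rat) int64 {
		return new(big.Int).Quo(
			new(big.Int).Mul(big.NewInt(value), r.Num()), r.Denom(),
		).Int64()
	}
	return floorMul(to) - floorMul(from)
}

func main() {
	// 11 VUs split in a 20%, 25%, 45%, 10% distribution.
	boundaries := []*big.Rat{
		big.NewRat(0, 1), big.NewRat(20, 100), big.NewRat(45, 100),
		big.NewRat(90, 100), big.NewRat(1, 1),
	}
	sum := int64(0)
	for i := 1; i < len(boundaries); i++ {
		vus := scale(11, boundaries[i-1], boundaries[i])
		sum += vus
		fmt.Printf("segment (%s, %s]: %d VUs\n", boundaries[i-1], boundaries[i], vus)
	}
	fmt.Println("sum:", sum) // always exactly 11, with no rounding drift
}

The sum telescopes to floor(value*1) - floor(value*0) = value, which is why the total always comes out exact no matter how the (0, 1] interval is partitioned.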
After this comment yesterday, I thought a bit more about how data could be partitioned between multiple independent k6 instances without synchronization between them. The data would need to be something with a known discrete size in the init context, so that the scheduler VUs/iterations could be synchronized by it in some way. But other than that, it could be anything - bytes in a file, lines in a text file, lines in a CSV file, JSON objects, JSONPath/XPath query results, etc. Then there are a few possible ways to distribute (i.e. iterate over) these discrete pieces of data between the iterations. Say we have 10 pieces of data and we want to spread them in a 50%, 30%, 20% distribution:
Whether this striping is something that should actually be a responsibility of the data segmentation functions in k6 is still unclear to me, but it seems like maybe it should be... On the one hand, users should be able to achieve something pretty similar by just randomizing the source data. On the other hand, that won't work with streaming data sources like large files. And it's unclear how practical it would be when users want to iterate over the data set multiple times...
Last Friday, while thinking about #1299 (comment), I had the seed of an idea for how we could implement something like striped data partitioning. More importantly, this could probably also help us with the distribution of iterations between instances for the arrival-rate executors, so that they aren't grouped closely together (#1007 (comment), #1299). Sorry for the long wall of text... again... but I wanted to describe my thought process, since even though I think this solution should work, I'm hopeful that someone else can spot something simpler that I've missed.

The basic idea is that with the arrival-rate executors, both the constant and the variable one, we can look at the iterations as a sequence of discrete events. Similarly, for data partitioning, we can look at the data entries as a sequence. And we don't really need to know how long these sequences are - they can even be quasi-infinite, giving us the nice option of easily looping through small data files multiple times! Or of not bothering to calculate precisely the exact number of iterations a

As I've shown above, that's very easy to do if you know the total number of elements and don't mind big sequential blocks of iterations/data. Let's say we have 3 execution segments, ES0=(0 - ⅓], ES1=(⅓ - ½], ES2=(½ - 1], and we need to split a file with 60 data entries... Or, alternatively, we need to execute 6 iterations per second over 10 seconds, i.e. 60 total iterations. Let's name the sequence of iterations

The simplest (non-striped) way of splitting the iterations would cause ES0 to execute iterations

Ideally, we'd like all instances to process data/iterations simultaneously. Additionally, for the arrival-rate executors, we'd like the iterations of each instance to be as spread apart as possible, so we can use fewer VUs and so that the load is spread more evenly across the load generators. So, in our example above, the last instance (responsible for 50% of the iterations) shouldn't process

So, my initial idea was that we can turn the information we have, the execution segment start and finish values, into a quasi-sequence as well. Only this sequence would be finite and made out of booleans, and we would use it as a circular "mine" / "not mine" filter for the iterations/data from the main sequence. Because it's finite and circular, the main sequence of data/iterations can even be infinite. And each instance, using its own execution segment, would generate its own "mine" sequence independently, and have it be disjoint with the "mine" sequences of all other instances, totally reproducibly. This would have been easy to do if all execution segments had the same denominator... That is, instead of

But since we don't know the

Instead, I think I figured out a way to get 95% of the benefits of perfectly striped data partitioning by just adding some deterministic pseudo-randomness into the mix. I remembered that you can use prime numbers to iterate through all cells of an array exactly once in a pseudo-random (but deterministic) manner! This is a short post I found on the topic, though there's probably a better explanation (or alternatives) in some algorithms book. This would have been sufficient if we had a single denominator for all execution segments, but we don't, so we have to dig a little deeper... Essentially, because we don't know the total number of iterations/data entries, and we don't want each instance to know all execution segments, we need to have a way to evenly map any integer number
A simple demo of the concept is to just pick a random prime, and a random big-enough window size (
Here's a short demo of the concept:

package main
import "fmt"
const (
randomPrime = 17
window = 20
start = 1
)
func main() {
for i := 1; i <= 500; i++ {
if i%window == 1 {
fmt.Print("\n")
}
fmt.Printf("%d/%d, ", (start+i*randomPrime)%window, window)
}
}

The result would look like this:
The specific algorithm above is probably not the best approach, since it's not very random or even, but it illustrates the point. The next "level", which is what we probably want to use, is something like an LCG (linear congruential generator). It's not going to give us a perfectly even distribution (e.g. a 50% segment executing exactly every second iteration), but assuming the function is fairly random, it should be pretty close. In any case, whoever ends up implementing this will need to read up some more on the topic and thoroughly test things... Here are some potential resources on similar issues, though I still haven't read them fully, and a book or paper on the topic would probably be even better:
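For reference, here is a tiny sketch (with assumed constants, not a recommendation) of the kind of full-period generator this would need - with a modulus of 2^16 and parameters satisfying the Hull-Dobell conditions, an LCG deterministically visits every value in [0, 65536) exactly once per cycle:

package main

import "fmt"

// A full-cycle LCG over m = 2^16: with c odd and a ≡ 1 (mod 4), the
// Hull-Dobell theorem guarantees the generator visits every value in
// [0, m) exactly once before repeating. The constants here are just one
// example pair that satisfies those conditions.
const (
	a = 25173
	c = 13849
	m = 1 << 16
)

func main() {
	seen := make([]bool, m)
	x := 12345 // the seed only determines where in the cycle we start
	for i := 0; i < m; i++ {
		x = (a*x + c) % m
		if seen[x] {
			fmt.Println("early repeat - not a full cycle!")
			return
		}
		seen[x] = true
	}
	fmt.Println("all", m, "values visited exactly once")
	// An instance could map iteration i to the fraction (x+1)/m and run the
	// iteration only if that fraction falls in its own (from, to] segment.
}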
As a more concrete example of the type of pseudo-randomness we need, I found a convenient Go implementation of a full-cycle LFSR: https://github.com/taylorza/go-lfsr (some discussion and other related links here). So, this might not be the best algorithm for our use case (though it's probably good enough), and we'd definitely want to use at least the 16-bit version (and who needs more than 64k anyway...), but here's a short demo with the 8-bit LFSR implementation:

package main
import (
"fmt"
"github.com/taylorza/go-lfsr"
)
func main() {
rnd := lfsr.NewLfsr8(13)
for i := 1; i <= 255*3; i++ {
next, nl := rnd.Next()
fmt.Printf("%d,", next)
if nl {
fmt.Println()
}
}
}

It produces something like
This is a deep rabbit hole 😅 Some benchmarking and a lot of further investigation are definitely needed, but part of the sequence of the previous 8-bit LFSR (

package main
import (
"fmt"
"log"
"modernc.org/mathutil"
)
func main() {
rnd, err := mathutil.NewFC32(0, 255, true)
if err != nil {
log.Fatal(err)
}
rnd.Seed(13)
for i := 1; i <= 3*256; i++ {
next := rnd.Next()
fmt.Printf("%d,", next)
if i%256 == 0 {
fmt.Println()
}
}
}

which, even with a cycle of only 256, seems fairly random to my monkey brain:
To make things simpler to understand, I'll try to give an example. Let's use the execution segments from the example above, with Inst0 having execution segment

The basic idea is that each of the three instances would independently generate the same pseudo-random numbers while looping from
If I have calculated everything correctly:
This isn't always going to hold, for example ratios at half the iterations (

In any case, I'd very much appreciate it if someone can think of a simpler approach to this problem. The basic idea behind partitioning work is that, whether I run a load test from a single machine or spread that same load test across multiple machines, I shouldn't notice any difference at the receiving end (i.e. the SUT) or in any charts of the active VUs or iterations/s (barring things like network speed/latency/hardware differences, of course).
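To make the "same pseudo-random numbers, filtered by segment" idea concrete, here is a minimal Go sketch. It reuses the toy prime-walk generator from the first demo above (a real implementation would use a proper full-cycle generator like the LFSR or FC32 ones), together with the (0-⅓], (⅓-½], (½-1] segments and the 60 iterations from the earlier example; the helper names are made up:

package main

import (
	"fmt"
	"math/big"
)

// The toy deterministic walk from the earlier demo - every instance computes
// the same value for iteration i, since these constants are shared.
const (
	randomPrime = 17
	window      = 20
	start       = 1
)

// position maps iteration i to a fraction in (0, 1].
func position(i int64) *big.Rat {
	return big.NewRat((start+i*randomPrime)%window+1, window)
}

// mine reports whether iteration i falls into this instance's (from, to]
// execution segment - its "mine" / "not mine" filter.
func mine(i int64, from, to *big.Rat) bool {
	p := position(i)
	return p.Cmp(from) > 0 && p.Cmp(to) <= 0
}

func main() {
	segments := [][2]*big.Rat{
		{big.NewRat(0, 1), big.NewRat(1, 3)},
		{big.NewRat(1, 3), big.NewRat(1, 2)},
		{big.NewRat(1, 2), big.NewRat(1, 1)},
	}
	counts := make([]int, len(segments))
	for i := int64(0); i < 60; i++ { // the 60 iterations/data entries example
		for idx, s := range segments {
			if mine(i, s[0], s[1]) {
				counts[idx]++
			}
		}
	}
	// Each iteration is claimed by exactly one instance, and the counts
	// roughly follow the segment sizes (limited by the crude window of 20).
	fmt.Println(counts)
}

With a better generator (and a bigger window), the per-instance counts and spacing get closer to the ideal ⅓ / ⅙ / ½ split; the key property is that the instances never need to communicate to stay disjoint.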
Hmm, thinking some more about this, I'm not sure that we can avoid something like this predictable-pseudo-random approach if we want striped data/iteration partitioning, even if each instance had the same denominator for its execution segment (or some other extra synchronization). I think that, in order to get a near-perfect distribution (if that's even possible for all cases, which I think it is, but I'm still not 100% sure), we'd have to send the list of all execution segments for a test run to each instance, besides its own segment. This is probably the minimum requirement, since it seems to me that we must have some sort of deterministic bin-packing-like algorithm (though that's probably not the correct term in our case) that partitions the segments... That algorithm would probably be relatively easy in our use case (the simplest one I can think of involves an array with size

To illustrate why, let's imagine 3 instances with the following segments:
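For a rough idea of what such a deterministic partitioning scheme could look like (purely my own sketch, not a worked-out algorithm): assuming every instance knows the full segment sequence and their common denominator d, each one can independently build the same d-slot pattern by always handing the next slot to the segment that is furthest behind its ideal share, and then claim iteration i only if slot i % d is its own. Using the (0-⅓], (⅓-½], (½-1] segments from the earlier example, i.e. weights 2, 1, 3 over a common denominator of 6:

package main

import "fmt"

// buildPattern spreads d slots among segments whose integer weights (their
// sizes scaled by the common denominator d) sum to d. Each slot is given to
// the segment with the largest deficit between its ideal share so far and
// the slots it has actually received - a simple, fully deterministic scheme.
func buildPattern(weights []int) []int {
	d := 0
	for _, w := range weights {
		d += w
	}
	pattern := make([]int, d)
	counts := make([]int, len(weights))
	for slot := 0; slot < d; slot++ {
		best, bestDeficit := 0, -1.0
		for k, w := range weights {
			ideal := float64(slot+1) * float64(w) / float64(d)
			if deficit := ideal - float64(counts[k]); deficit > bestDeficit {
				best, bestDeficit = k, deficit
			}
		}
		pattern[slot] = best
		counts[best]++
	}
	return pattern
}

func main() {
	pattern := buildPattern([]int{2, 1, 3}) // segments ⅓, ⅙, ½ over d = 6
	fmt.Println(pattern)
	// Every instance computes the identical pattern; instance k then runs
	// iteration i only if pattern[i%len(pattern)] == k.
}

The tie-breaking and the float comparison would need care (rationals would be safer), but the point is that the pattern is a pure function of the segment sequence, so no runtime coordination is needed.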
This is a proposal of an implementation approach/algorithm for distributing "work" between multiple semi-independent k6 instances in a distributed/cloud execution scenario. I think I have a clear idea of how that could work, and I'd appreciate any feedback on it - especially if there's something fishy... I plan to have this supported in the new schedulers, and unit-tested fairly extensively, but still.
Requirements
We need a way to reproducibly, correctly, consistently, and efficiently distribute work between multiple k6 instances with the minimum amount of coordination and communication between them, and with no central scheduler.
There shouldn't be any off-by-one errors, rounding mistakes, VU numbers jumping up and down, or any other scheduling inconsistencies. Basically, launching the same test multiple times should produce identical scheduling outcomes - there shouldn't be any randomness involved with the way we distribute the work (i.e. VUs/iterations/iterations-per-second/etc., or even things like JSON/CSV/XML records) across the different instances, load zones, distributions, etc.
Proposal
So, the basic idea of my proposal is this. We should use rational numbers [1] to divide the test execution work into (from, to] segments [2] for each instance, according to the user-specified distributions and any other placement constraints. from and to are rational numbers between 0 and 1 - they can be percentages, as well as fractions like 1/3 or decimals like 0.25. As an example, if we want to split a single test equally between 3 different k6 instances, here's how that would look:

If each k6 instance received only the percentage/ratio of its share of the work (e.g. 1/3, 25%, etc.), that wouldn't be enough to cover the requirements listed above. To have correct distributions, we'd either need an external scheduler, or we'd need each k6 instance to know the percentage details of the other instances involved in the test, so it can independently and correctly figure out its own workload portion, accounting for things like rounding and off-by-one errors. And even then, calculating all of those things would be somewhat annoying and wasteful...

Instead, if each k6 instance receives a (from, to] segment of the work [3], it should be able to correctly, efficiently and consistently calculate the amount of work it should be doing (e.g. how many VUs it should have running) at any given moment with absolute precision. Each instance won't need to know what the work shares of the other k6 instances are, or even how many other k6 instances are actually involved - just the size and position of its own slice of the work "pie". And for the whole duration of the test, the number of VUs/iterations-per-second/etc. scheduled among all of the k6 instances will be exactly equal to the amount if the test was scheduled to run locally, on a single machine! Magic ✨ (the good kind 😄)

Calculations
Here's how the math works... I think 😅 I'd be very grateful for any corrections here, since it's kind of important that I'm not mistaken 😄
Since the whole interval is from 0 to 1 (i.e. 0% - 100%), if each instance knows its own (from, to] segment, then it can easily calculate the segment that encompasses all of the other instances "before" it, however many those are. Let's call that prevSegment - it will either be empty (if from is 0), or it will be (0; from]. I think that this is all that's necessary for an instance to calculate, with absolute precision, every scaling of its VUs/iterations/iterations-per-second/etc. it needs to do. Let's use VUs as an example - here's how we can calculate how many VUs an individual instance should have, if TotalVUs is the number of VUs that should be active for the whole test:
Examples

Here's a table with an example. Say we have 7 VUs that we want to split equally between 3 instances:
See how calculating the end value (last column) for each segment doesn't require any information besides the (from; to] values for that specific segment. And yet, when we sum the values in the last column, we get 7 - precisely the number of VUs we're distributing!

Another example, this time splitting 7 VUs fairly between 4 instances:
Each k6 instance can precisely scale any arbitrary integer (i.e. indivisible, like VUs or instances or record numbers) or fractional (iterations/second) value in constant time, with just a few simple arithmetical operations! 🎉
Another example, dividing 11 VUs in a 20%, 25%, 45%, 10% distribution:
🎉 😄
And as a demonstration of why the position of the segment is important, let's switch the order of the percentages in the above example, i.e. let's calculate the 11 VUs spread across a 10%, 45%, 20%, 25% distribution:

Of course, such differences are usually negligible when hundreds or thousands of VUs are involved. The important guarantee this algorithm gives us is that when we partition the workload into static segments, the calculations for each segment will always be consistent - the same inputs will be scaled to the same outputs. There shouldn't be any of the inconsistencies we've seen with VU numbers jumping up and down... And, since the calculations are so simple, they will be fast and will save us the headaches of wondering how we should round stuff 😄
This approach is much saner than some we've previously discussed for complicated situations. Imagine the following scenario: linearly scaling up VUs from 0 to 6 over 1 minute across 10 instances (startVUs=0, stage.target=6, stage.duration=60s, instances=10). How many VUs should each instance be running at time=30s? 😄 This is easy to calculate when you have a central scheduling authority, but it becomes quite annoying if each k6 instance has to do it individually. On the other hand, with from:to execution segments, it's floor( (3 * from) % 1 + 3 * (to - from) ), with rational math of course 😎
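As a quick sanity check of that expression - a hedged sketch, assuming it generalizes from the 3 VUs of this example to any current total - here it is evaluated with exact rational math for 10 equal segments:

package main

import (
	"fmt"
	"math/big"
)

// ratFloor returns floor(r) for a non-negative rational r.
func ratFloor(r *big.Rat) *big.Rat {
	return new(big.Rat).SetInt(new(big.Int).Quo(r.Num(), r.Denom()))
}

// instanceVUs evaluates floor( (total*from) % 1 + total*(to - from) ), the
// expression from the ramp-up example above, with total = 3 at time=30s.
func instanceVUs(total int64, from, to *big.Rat) int64 {
	t := new(big.Rat).SetInt64(total)
	tf := new(big.Rat).Mul(t, from)
	frac := new(big.Rat).Sub(tf, ratFloor(tf)) // (total*from) % 1
	rest := new(big.Rat).Mul(t, new(big.Rat).Sub(to, from))
	return ratFloor(new(big.Rat).Add(frac, rest)).Num().Int64()
}

func main() {
	sum := int64(0)
	for k := int64(0); k < 10; k++ { // 10 instances with equal 10% segments
		from, to := big.NewRat(k, 10), big.NewRat(k+1, 10)
		vus := instanceVUs(3, from, to)
		sum += vus
		fmt.Printf("instance %d, segment (%s, %s]: %d VUs\n", k+1, from, to, vus)
	}
	fmt.Println("total:", sum) // adds up to exactly 3 across all instances
}

With these inputs the 4th, 7th and 10th segments each get 1 VU at the 30-second mark and the rest get 0, summing to exactly the 3 VUs a single local run would have at that point.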
Data partitioning

I initially thought that we should disable the shared-iterations scheduler in distributed/cloud execution, since we'd have to split the iterations between instances, and different instances may finish at very different times, depending on latency, capabilities or load.

But, given that we can support that execution mode via the execution segments described above with no extra effort, I no longer feel that way 😄 We may emit a warning that "the test will be executed on multiple machines and different iteration groups can finish at different times due to network latency", or something like that, but we should still support shared iterations in a distributed/cloud execution environment 🙂
Besides being useful to some users and easy to do with execution segments, shared iterations can serve another purpose 🙂 I thought some more about how we can implement the use case of "parsing a huge CSV/JSON/XML/etc. file, using each entry from that file exactly once" in k6 - both locally in a single k6 instance and spread across multiple different instances.
I think the answer may be the following combination of things:

- the shared-iterations scheduler... thus my desire to support it 😄
- streaming parsers for the data files [4]
- a new k6 archive format to replace the tar one [5]

And... that's it, I think 😄 🎉 If we have those 3 parts above, we can get all of the rest in a user-friendly, natural and composable way. The initial reading of the file and finding the correct starting place for each k6 instance (according to its own segment) could take some time, even with streaming parsers - if the file is huge, or if we do some processing like XPath/JSONPath/etc. And finding the total count of data entries in that file may also take a few seconds. But that's fine in both cases, since it will happen in the init phase 🙂 And we'll never actually need to load the whole file into memory!
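As a rough sketch of just the "find my starting place" arithmetic (in Go rather than a k6 script, with an assumed record count - separate from the made-up CSV demo mentioned below):

package main

import (
	"fmt"
	"math/big"
)

// dataSlice maps an instance's (from, to] segment to the half-open index
// range [start, end) of the data entries it should use, out of a known
// total: start = floor(total*from), end = floor(total*to). Adjacent
// segments yield adjacent, non-overlapping ranges that cover every entry.
func dataSlice(total int64, from, to *big.Rat) (start, end int64) {
	floorMul := func(r *big.Rat) int64 {
		return new(big.Int).Quo(
			new(big.Int).Mul(big.NewInt(total), r.Num()), r.Denom(),
		).Int64()
	}
	return floorMul(from), floorMul(to)
}

func main() {
	// e.g. 1000 CSV records split between segments (0, ⅓], (⅓, ½], (½, 1]
	segments := [][2]*big.Rat{
		{big.NewRat(0, 1), big.NewRat(1, 3)},
		{big.NewRat(1, 3), big.NewRat(1, 2)},
		{big.NewRat(1, 2), big.NewRat(1, 1)},
	}
	for _, s := range segments {
		start, end := dataSlice(1000, s[0], s[1])
		fmt.Printf("segment (%s, %s]: records [%d, %d)\n", s[0], s[1], start, end)
	}
}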
Here's a made-up code demo with made-up CSV APIs:
Notes
[1] Rational numbers are pretty straightforward in Go, they even have a cool name - big.Rat 😄

[2] Not sure if "segments" is the best term. Other potential options: intervals, portions, shares.
[3] UX bonus - users can use the --execution-segment option as a tool for quick and dirty scaling down of a load test! For example, if they want to run the test with only 10% of the load, without fiddling with all of the script options, they can just run it with k6 run --execution-segment 10% script.js (equivalent to --execution-segment="0-10%").

[4] There are Go libraries for those, like https://github.com/jszwec/csvutil for streaming CSV parsing. And for JSON and XML, the Go stdlib support may be enough: https://golang.org/pkg/encoding/json/#Decoder and https://golang.org/pkg/encoding/xml/#Decoder. Though, as we've discussed in other issues, XPath/JSONPath may be more problematic.
[5] In my head, the ideal new k6 archive format would be something that:

.zip files have an actual file system, .tar files are just a stream of chunks... Go has pretty good native support for archives, both in the standard library and via community libraries, so we should have plenty of good choices here...
Connected issues: