808/about executors #974
And pages for dropped iterations and VU allocation
@na-- I am requesting your review first. I only request your review on the two new pages, related to dropped iterations and VU allocation. If you want to comment on the structural changes of the docs, feel free, but I really just want your expert opinion on the accuracy and usefulness.
There's a version of the docs published here: https://mdr-ci.staging.k6.io/docs/refs/pull/974/merge It will be deleted automatically in 30 days.
Proofreading
> Different scenario configurations can affect many different aspects of your system,
> including the generated load, utilized resources, and emitted metrics.
> If you know a bit about how scenarios work, you'll both design better tests for resources and goals, and interpret test results with more understanding.
I am not sure I understand "design better tests for resources and goals"
The thinking was that if you understand scenarios better, you can:
- Make better decisions for your test goals, because certain scenarios correspond better to certain test designs. To spike test a single component for raw throughput probably requires an arrival-rate executor. To just see how quickly your system can churn through x number of iterations, shared iterations is a simpler choice.
- Use resources better, because understanding not to use maxVUs means you'll use CPU cycles more efficiently.

Of course now I realize that this is an enormous amount of implied information.
Will change to:

> If you know a bit about how scenarios work, you'll design better tests and interpret test results with more understanding.
@@ -0,0 +1,88 @@
---
title: VU allocation
"VU allocation" seems a bit misleading, since we are only talking about arrival-rate scenarios
- title: VU allocation
+ title: Arrival-rate VU allocation
or maybe even
- title: VU allocation
+ title: Arrival-rate configuration
Need to think about this.
"Arrival-rate configuration" probably opens the door to more topics, like using options. That's not bad, but it may mean there's a better place to put this (not blocking for this PR). Is there any reason readers should know about non-arrival rate allocation? If so, maybe we could add it. If not, maybe it doesn't matter to document.
Not sure. It's a good point though.
Now that I've thought about it, I don't want to go with "Arrival-rate configuration" because that should include Graceful stop and maybe more, which opens a whole new round of content structure. It's nice to keep info atomic.
Is VU allocation in non-arrival-rate ever important? If so, we could just add it to the doc later.
Either way, I choose these new titles, ranked by preference. You can pick one, and that's what we'll go with:
- VU allocation
- VU pre-allocation
- Arrival-rate VU allocation
> because that should include Graceful stop and maybe more
graceful stop is not specific to arrival-rate executors and already has its own dedicated page that explains it: https://k6.io/docs/using-k6/scenarios/graceful-stop/
> Is VU allocation in non-arrival-rate ever important? If so, we could just add it to the doc later.
Well, considering the configuration of non-arrival-rate executors is specified in terms of VUs, there isn't really anything complicated to explain there 😅
Again, the complexity with arrival-rate is not just how VUs are allocated, but how to balance the VUs and the desired rate and how to find the right values for the former based on the latter and on iteration duration.
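That balance can be written down as a rough rule of thumb (this is back-of-the-envelope guidance rather than an exact k6 guarantee, and the helper name below is made up for illustration):

```javascript
// Rule of thumb: to sustain an arrival rate without dropping iterations,
// you need roughly (rate / timeUnit) * (average iteration duration) VUs,
// rounded up, since each VU can only run one iteration at a time.
function estimatePreAllocatedVUs(rate, timeUnitSeconds, avgIterationSeconds) {
  return Math.ceil((rate / timeUnitSeconds) * avgIterationSeconds);
}

// 100 iterations/s with iterations averaging 2.5 s needs about 250 VUs.
console.log(estimatePreAllocatedVUs(100, 1, 2.5)); // 250
```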
> Again, the complexity with arrival-rate is not just how VUs are allocated, but how to balance the VUs and the desired rate and how to find the right values for the former based on the latter and on iteration duration.
Is this not encompassed in "VU pre-allocation"? Basically, I'm looking for the shortest way to say the most in the most accurate way.
"VU Pre-allocation" only makes sense to you because you already know it applies for arrival-rate executors. A new user won't know that fact and won't click on that menu entry at all, even if this is exactly the information they are looking for.
> - `rate` determines how many iterations k6 starts.
> - `timeUnit` determines how frequently it starts the number of iterations.
Having these 2 in different lines, with a separate explanation for each, is more confusing than helpful in my opinion. The current disjointed explanation is longer, more confusing, and less correct than "k6 will try to start `rate` iterations evenly spread across a `timeUnit` (default `1s`) interval of time".
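To make the "evenly spread" wording concrete, here is a tiny illustrative calculation (plain JS, not k6 internals; the function name is made up):

```javascript
// Average gap between iteration starts when `rate` starts are spread
// evenly across one `timeUnit` (expressed here in milliseconds).
function iterationGapMs(rate, timeUnitMs = 1000) {
  return timeUnitMs / rate;
}

// rate: 30, timeUnit: '1s' -> an iteration starts roughly every 33 ms
console.log(iterationGapMs(30));
// rate: 90, timeUnit: '1m' -> an iteration starts roughly every 667 ms
console.log(iterationGapMs(90, 60000));
```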
> <Blockquote mod="attention" title="">
>
> In cloud tests, **both `preAllocatedVUs` and `maxVUs` count against your subscription.**
Another problem: you mention `maxVUs` here, but we haven't mentioned `maxVUs` anywhere before in this document... This was one of the reasons for my longer explanation that you discarded; it had a paragraph with a cohesive explanation for both `preAllocatedVUs` and `maxVUs`.
I'm still somewhat of the opinion that `maxVUs` shouldn't be mentioned anywhere :-). What I'll do is make the first admonition only about `preAllocatedVUs`, and then add a second admonition in the `maxVUs` section. It's not very elegant, but it doesn't cram so much information together.
Again, if I could rewrite history, `maxVUs` probably would not exist 😅 But now that it exists, we should try to make it as clear as possible how it functions and why it might not be a good idea to use.

An admonition only for `preAllocatedVUs` doesn't make sense; this is how cloud subscriptions normally work, i.e. no admonition is needed just for it.

And if we want to tuck `maxVUs` only at the end of the document, somewhat out of sight (which I don't necessarily mind), then we should only have an admonition there.
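For context, here is a minimal sketch of a scenario that sets both options (the executor and option names are real k6 configuration; the scenario key and numbers are arbitrary):

```javascript
// k6 options fragment: a constant-arrival-rate scenario with both
// pre-allocated VUs and extra headroom via maxVUs.
export const options = {
  scenarios: {
    steady_load: {
      executor: 'constant-arrival-rate',
      rate: 50,            // iteration starts per timeUnit
      timeUnit: '1s',
      duration: '5m',
      preAllocatedVUs: 60, // VUs initialized before the test starts
      maxVUs: 100,         // upper bound if k6 needs to allocate more mid-test
    },
  },
};
```

Leaving `maxVUs` out entirely avoids the mid-test allocation behavior discussed above.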
Nobody will read the text you have highlighted in the middle of that paragraph 😅 People will just see the two admonitions and get the very wrong impression that in the cloud we will charge them for `preAllocatedVUs + maxVUs`.
Point taken. With 19784d4, it looks like this:
Co-authored-by: na-- <n@andreev.sh>
…s/00 About scenarios/02 VU allocation.md Co-authored-by: na-- <n@andreev.sh>
- More explicit title
- Double admonition
- Join rate and timeUnit in one list item
- More explicit example
- Update list page for new page URI
@ppcano, na-- and I have done a pretty big job on the two new docs, and I don't think anyone has the desire to get into them anymore, but would you mind doing a quick review of the info architecture changes? There's no doubt that there's more work to do to organize this information, but I think it can wait for more PRs.
> - Whether VU traffic stays constant or changes
> - Whether to model traffic by iteration number or by VU arrival rate.
>
> Your scenario object must define the `executor` property with one of the predefined executor names.
> Along with the generic scenario options, each executor object has additional options specific to its workload.
> - For the list of the executors, refer to the [Executor guide](/using-k6/scenarios/executors/).
> + For the list of the executors, refer to [Executors](/using-k6/scenarios/executors/).
@MattDodsonEnglish I think we should list here all the executors. If not, readers might move to Concepts without getting the "general" idea of the different executor options.
I suggest adding something similar to the Executors table. For example:
- Shared iterations: a fixed number of iterations is "shared" between a number of VUs.
- Per-VU iterations: each VU executes an exact number of iterations.
- ...
@ppcano I deleted that list in #936. I didn't like duplicating content in this way.
I'll make a new PR to put it back in, maybe just as a summary:
You can configure executors to distribute workload according to:
- Iterations. Either shared by the VUs, or distributed across them.
- VUs. Either a constant number or a ramping number.
- Iterations per second. Either constant or ramping.
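A hedged sketch of what that summary could look like in a `scenarios` object, one executor per family (the executor names are real k6 executors; the scenario keys and numbers are illustrative):

```javascript
// k6 options fragment: one scenario from each executor family.
export const options = {
  scenarios: {
    // Iterations: a fixed total, shared by the VUs
    by_iterations: {
      executor: 'shared-iterations',
      vus: 10,
      iterations: 200,
    },
    // VUs: a ramping number of VUs over stages
    by_vus: {
      executor: 'ramping-vus',
      startVUs: 0,
      stages: [{ duration: '30s', target: 20 }],
    },
    // Iterations per second: a constant arrival rate
    by_rate: {
      executor: 'constant-arrival-rate',
      rate: 30,
      timeUnit: '1s',
      duration: '1m',
      preAllocatedVUs: 40,
    },
  },
};
```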
> I'll make a new PR to put it back in
added a comment with a suggestion
Could you please clarify a little bit regarding the connection between the Open Model and Dropped Iterations? The Test lifecycle doc states the following:

But doesn't it contradict the Open Model, which states:

As I understand it, to avoid coordinated omission, the VU's … Now, there's also a question of what it means for a VU to be "free" or "busy". The Dropped Iterations doc states the following:

Here I don't really understand what "free VU" or "busy VU" means. Is a VU that has an iteration in progress considered "busy"? How does that interplay with the Open Model, which must run the VU's function on a strict schedule? Another question about free/busy arises from the fact that … The Dropped Iterations article sheds some light on that:

So there's clearly a separation between "iteration duration" and "response". But how exactly are these defined? Is it related to …

Hope my questions make sense! And thank you very much for the amazing work you are doing here; systematizing and documenting benchmark approaches is super important!
Hey @folex, these are great questions; maybe you should make an issue. I no longer work on the k6 docs, and I don't know if anyone else will see this thread. But I'm still a k6 fan, and I did write the sentences you quoted, so I'll try to comment on a few things.

I can't remember the difference between iteration time and response time, but I think each iteration could make many requests, all with unique response durations. The iteration duration is the entire VU code, including all requests, sleep time, console logs, etc. About the async stuff, I have no idea. I defer to @mstoykov 🙇
Hi @folex,

A busy VU is one that is executing an iteration. A free one is one that isn't executing one ;). An iteration that has started ends only once it has no async jobs left to finish, regardless of whether the … This is predominantly because anything else would've been really confusing.

```javascript
export default function () {
  http.asyncRequest("GET", someurl).then((res) => {
    // do something
  });
}
```

Should this code with … The answer we (me specifically) chose is that this isn't really a good idea, and instead k6 waits for everything async to finish. It already needed to wait for them at the end of `k6 run`, for example, so it made sense to not "contaminate" between iterations.

So in this case a busy VU is busy even if it is waiting on an async operation to finish. As such, an iteration and its duration span the whole time between the iteration starting and the last (if any) async operation finishing. As @MattDodsonEnglish pointed out, iterations can have multiple responses, or none. k6 just happens to be primarily used in cases where there are things such as requests and responses, but this is not really a requirement.

While in theory this is true, reality makes it impossible to just continue to "start" stuff, which is why, when we can't, we tell users that we dropped iterations, which means the test should probably be marked failed. Even outside of k6 this isn't really possible: you will run out of resources such as memory, CPU, and file descriptors. In the case of k6, another limiting factor is that k6 runs JS, and JS is a single-threaded language by specification. As such, in order for k6 to run an iteration, it needs a JS VM that is not currently running one, which is basically what a VU is. In theory, you can keep making VUs, but:

Part of what k6 does is run the same test so you can compare and contrast. As such, if the test used to pass with X `preAllocatedVUs` and now it is dropping a bunch of iterations, that is probably pretty bad.

On the other hand, there was some discussion around letting arrival-rate executors specifically run multiple async default functions. But that has not been implemented and will likely need some limits as well, as just running 2000 default iterations in the same VU will likely not go great. Additionally, at the time this was discussed we were just adding async code and still mostly did not have many uses for it.

Not certain if I answered all your questions, but I hope this helps. 🙇 Also, commenting on merged PRs isn't the best idea. Maybe try the community forum, as I only saw by chance that someone asked me something on a merged PR.
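The busy/free rule described above can be mimicked in plain JavaScript (an illustrative sketch, not k6 internals; `runIteration` and `track` are made-up names):

```javascript
// A VU is "busy" for the whole time between an iteration starting and its
// last async job settling, even after the default function itself returns.
async function runIteration(iterationFn) {
  const pending = [];
  // The iteration function registers fire-and-forget work via `track`,
  // similar to calling .then() on an async request without awaiting it.
  await iterationFn((promise) => pending.push(promise));
  // Mimic k6: the iteration only ends once every async job has settled.
  await Promise.allSettled(pending);
}

let jobDone = false;
runIteration(async (track) => {
  track(new Promise((resolve) => setTimeout(() => {
    jobDone = true;
    resolve();
  }, 10)));
}).then(() => {
  // The fire-and-forget job is guaranteed to have settled by now.
  console.log('iteration ended, jobDone =', jobDone); // true
});
```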
Part of work in #808 .
As I tried to add explanatory text about VU allocation and dropped metrics, I struggled to put the content in place. Gradually, I moved the content from being grouped together in the top-level executors page, to being separate topics, to a new section for explanatory texts about scenarios. The organization still seemed incoherent, so I moved the other explanatory pages to a new section. This information architecture is the best design I could come up with to make it easy to find information and to make the work of positioning new topics much easier.
This work has a few steps: