
Arrival rate executors review #19

Merged: 6 commits into main from imma-executors-review on Jan 18, 2023

Conversation

@immavalls (Collaborator) commented:

Review to remove the recommendations to use maxVUs when using the constant arrival rate and ramping arrival rate executors.

It's usually best to pre-allocate all VUs (and let maxVUs take its default value, preAllocatedVUs).
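
For illustration, here's a minimal sketch of what the recommended setup looks like with the constant arrival rate executor (the scenario name, rate, and duration are made up for this example; the point is that only `preAllocatedVUs` is set and `maxVUs` is left to its default):

```javascript
import http from 'k6/http';

export const options = {
  scenarios: {
    constant_load: {
      executor: 'constant-arrival-rate',
      rate: 30,            // start 30 iterations per timeUnit
      timeUnit: '1s',
      duration: '5m',
      preAllocatedVUs: 50, // pre-allocate all the VUs the test is expected to need
      // maxVUs omitted on purpose: it defaults to preAllocatedVUs
    },
  },
};

export default function () {
  http.get('https://test.k6.io/');
}
```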

@MattDodsonEnglish (Contributor) commented:

Before reviewing this, I'm going to wait for review comments on grafana/k6-docs#974

@immavalls (Collaborator, Author) commented:

> Before reviewing this, I'm going to wait for review comments on grafana/k6-docs#974

No rush, sounds good to wait on the doc changes before applying those here. Thanks @MattDodsonEnglish

@MattDodsonEnglish (Contributor) left a comment:

Looks good! I just made comments to trim the language. The narrative flow seems good and technical discussion seems accurate 👍

immavalls and others added 2 commits January 18, 2023 11:13
Co-authored-by: Matt Dodson <47385188+MattDodsonEnglish@users.noreply.github.com>
@immavalls requested a review from javaducky, January 18, 2023 10:16
@immavalls (Collaborator, Author) commented:

@javaducky I'd appreciate your review (not urgent). @MattDodsonEnglish already updated the docs, see https://k6.io/docs/using-k6/scenarios/concepts/ and especially https://k6.io/docs/using-k6/scenarios/concepts/arrival-rate-vu-allocation/. Thanks!

@javaducky (Contributor) left a comment:

Just a couple minor suggestions, otherwise looks fine to me.


> Restating once again, the _Ramping Arrival Rate_ executor is focused on the _iteration rate_ over a period of time within stages. k6 aims to eliminate the need to be overly concerned about the actual number of users required to achieve such a rate. As a user of the executor, our script can simply specify the maximum number of VUs allowed, letting k6 handle the actual details. This can be referred to as _autoscaling_.

> From our example above, we have that our request duration or latency is, on average, 116.57ms, and the 95 percentile is around 151.35ms. With just 2 VUs (`preAllocatedVUs`), in a very optimistic scenario, we cannot expect much more than `2 VUs / 0.116 s = 17.24 iterations/s`. Even if this was the only factor at play, which we see is not in the next section.
> From our example above, we have that our request duration or latency is, on average

The "request duration or latency is" part feels like it should be "request duration--or latency--is." 🤷

> Even if this was the only factor at play, which we see is not in the next section.

This sentence seems incomplete.
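
As a side note, the back-of-the-envelope bound quoted above works out as follows (plain JavaScript, numbers taken from the excerpt; not a k6 script):

```javascript
// Rough upper bound on the iteration rate when every iteration takes ~116 ms
// on average and only 2 VUs are pre-allocated (figures from the excerpt above).
const avgIterationDuration = 0.116; // seconds per iteration
const preAllocatedVUs = 2;

// Each VU can start at most 1 / avgIterationDuration iterations per second,
// so the optimistic ceiling is preAllocatedVUs / avgIterationDuration.
const maxRate = preAllocatedVUs / avgIterationDuration;
console.log(maxRate.toFixed(2)); // ≈ 17.24 iterations/s
```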

immavalls and others added 2 commits January 18, 2023 15:35
Co-authored-by: Paul Balogh <javaducky@gmail.com>
@immavalls merged commit 3ba1787 into main, Jan 18, 2023
@immavalls deleted the imma-executors-review branch, January 18, 2023 14:41