Add job load test: create multiple jobs based on the number of nodes #1998
Conversation
/assign @marseel
Thanks, but no need to review just yet. I'm still trying to get it running :)
force-pushed from 10f138b to 277ee7b
This is now ready for review @jprzychodzen
force-pushed from 89cf0ed to 096dd71
/retest
force-pushed from 096dd71 to 66d6f33
force-pushed from 66d6f33 to 8c2a468
Separate Indexed from NonIndexed to compare.
force-pushed from 8c2a468 to 32e6596
or perhaps @marseel
@@ -0,0 +1 @@
+MODE: Indexed
nit: missing newline at end of file
I decided to remove the file instead.
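For context, the MODE parameter selects the Job's completionMode. A minimal sketch of a job template consuming it, with illustrative variable names rather than the exact template in this PR:

apiVersion: batch/v1
kind: Job
metadata:
  name: {{.Name}}
spec:
  # "Indexed" or "NonIndexed", driven by the MODE parameter
  completionMode: {{.mode}}
  completions: {{.completions}}
  parallelism: {{.completions}}
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: {{.Name}}
        image: registry.k8s.io/pause:3.9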
@@ -1,12 +1,22 @@
{{$mode := (DefaultParam .MODE "Indexed")}}
{{$pods_per_node_per_size := 20}}
Why not 10 as the default, and guard it behind a parameter?
30 is the suggested number of Pods per Node in a highly scalable cluster. Since there are 3 job sizes (small/medium/large), each size would get 10 pods per node.
Got it. Updated.
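For illustration, guarding the default behind a parameter with CL2's DefaultParam could look like this (CL2_PODS_PER_NODE is an assumed name, not necessarily the one used in this PR):

{{$PODS_PER_NODE := DefaultParam .CL2_PODS_PER_NODE 30}}
# 30 pods per node split evenly across the 3 job sizes -> 10 each
{{$pods_per_node_per_size := DivideInt $PODS_PER_NODE 3}}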
{{$medium_jobs_count := DivideInt $total_pods_per_size $medium_job_size}}
{{$large_job_size := 400}}
{{$large_jobs_count := DivideInt $total_pods_per_size $large_job_size}}

name: batch

namespace:
  number: 1
We support only 3000 Pods per Namespace, see [1]. This means that we need a single Namespace per 100 Nodes (or 3000 Pods).
Updated the template to have parameters similar to load.
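For illustration, deriving the namespace count from the node count (as the load config does) could look like the sketch below; the parameter names are assumptions:

{{$NODES_PER_NAMESPACE := DefaultParam .CL2_NODES_PER_NAMESPACE 100}}
# e.g. 500 nodes -> 5 namespaces, keeping each under the 3000-Pods-per-Namespace limit
{{$namespaces := DivideInt .Nodes $NODES_PER_NAMESPACE}}

namespace:
  number: {{$namespaces}}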
qpsLoad:
-  qps: 1
+  qps: 5
Are you sure you want to use such a low constant value? For the CL2 test we are using --env=CL2_LOAD_TEST_THROUGHPUT=50 to determine QPS in the saturation tuning test.
Added a parameter for it, and also added parameters for nodes per namespace and throughput.
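A sketch of the parameterized tuning set, assuming the same CL2_LOAD_TEST_THROUGHPUT knob as the load test (the default of 10 below is illustrative):

{{$LOAD_TEST_THROUGHPUT := DefaultParam .CL2_LOAD_TEST_THROUGHPUT 10}}

tuningSets:
- name: UniformQPS
  qpsLoad:
    qps: {{$LOAD_TEST_THROUGHPUT}}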
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: alculquicondor, jprzychodzen, marseel

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment.
What type of PR is this?
/kind feature
What this PR does / why we need it:
Which issue(s) this PR fixes:
Fixes #799
Special notes for your reviewer:
It was not trivial to gather metrics from kcm, so I'm leaving that as a follow-up.
Because of this, I increased the resolution of the WaitForJobsToFinish gather interval.
Does this PR introduce a user-facing change?: