Extended load - StatefulSets and PVs. #716
Conversation
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: mm4tt. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Force-pushed from 231e460 to 4ef02b5.
/test pull-perf-tests-clusterloader2
/test pull-perf-tests-clusterloader2-kubemark
# StatefulSets
{{$STATEFUL_SET_SIZE := 5}}
{{$STATEFUL_SETS_PER_NAMESPACE := 2}}
Apparently there is a parallel-pod-management feature for StatefulSets:
https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#parallel-pod-management
So I think we should use it and have one of each size per namespace.
Done.
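For illustration, a minimal sketch of a StatefulSet with parallel pod management enabled (the name, replica count and image below are placeholders, not the exact template from this PR):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: small-statefulset          # placeholder name
spec:
  # Create and delete all pods in parallel rather than one at a time
  # (the default OrderedReady policy).
  podManagementPolicy: Parallel
  serviceName: small-statefulset
  replicas: 5
  selector:
    matchLabels:
      name: small-statefulset
  template:
    metadata:
      labels:
        name: small-statefulset
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1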
# measure api call latency for that. There are two options here:
# 1. Create a new measurement that will delete PVs created by StatefulSets
# 2. Create PVs manually, attach them here (but not via volumeClaimTemplates) and delete them
#    manually later
I actually think there is a third option that is faster and simpler (and I would vote for proceeding with this path, at least for now).
PVCs created for a StatefulSet follow the naming scheme "<volume-claim-template-name>-<statefulset-name>-<ordinal>".
So we can actually create a template file for the PVC and simply add a step to delete the PVCs, which would look like:
namespaceRange:
min: 1
max: {{$namespaces}}
replicasPerNamespace: 0
tuningSet: RandomizedSaturationTimeLimited
objectBundle:
- basename: pvname-ssname
objectTemplatePath: pvc.yaml
If at some point we decide on more PVCs per pod or more StatefulSets per namespace, we can handle that with a range
(ugly, but works - I don't think we need it immediately).
The only remaining problem is that we enumerate from 1 to N, while StatefulSets enumerate from 0 to N-1.
But I think we can change that universally in CL2 to use the 0..N-1 scheme (I think that's reasonable in itself).
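To make the naming scheme concrete: a volumeClaimTemplate named pv on a StatefulSet named small-statefulset produces PVCs pv-small-statefulset-0, pv-small-statefulset-1, and so on. A pvc.yaml for the deletion step would only need to reproduce that name pattern; a sketch (access mode, storage class and size are placeholders):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # Matches the name the StatefulSet controller generates for
  # volumeClaimTemplate "pv" on pod "small-statefulset-0".
  name: pv-small-statefulset-0
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi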
Done, but from what I can tell from the code, it looks like we're already enumerating from 0 in CL2. Please correct me if I'm wrong.
I think it used to be 1..N, but apparently it's not:
https://github.com/kubernetes/perf-tests/blob/master/clusterloader2/pkg/test/simple_test_executor.go#L199
As discussed f2f, unfortunately this wouldn't work, as CL2 is not able to delete objects that it didn't create. Long story short, it keeps its own state of all objects that were created/modified. When we try to delete the PVCs, CL2 doesn't know about them, assumes that none of them exist, and does nothing when deleting.
Leaving the TODO as it was; will tackle it in a separate PR.
action: start
apiVersion: apps/v1
kind: StatefulSet
labelSelector: group = statefulset
This one is pretty much only selecting the controlling objects
(so StatefulSets).
So for consistency with deployments (and probably others in the near future), I vote for making that:
group = load
This would mean that they will also be measured as part of pod-startup-time and everything else that assumes all pods have the group = load label.
WDYT?
Sounds good, done.
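For reference, a sketch of the adjusted step; the Identifier/Method/Params wrapper follows the usual CL2 config layout and is an assumption here (the diff above only shows the params), and the timeout value is a placeholder:

- measurements:
  - Identifier: WaitForRunningStatefulSets
    Method: WaitForControlledPodsRunning
    Params:
      action: start
      apiVersion: apps/v1
      kind: StatefulSet
      labelSelector: group = load
      operationTimeout: 15m

The StatefulSet template (and its pod template) then needs the group: load label as well, so that pod-startup-time and the other pod-level measurements pick up these pods.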
Force-pushed from 4ef02b5 to b56c389.
/hold
Let's wait for kubernetes/test-infra#14174, #772 and #767 to get merged. One is required to properly roll out this experiment; the other two are baselines that will make reviewing this easier.
@@ -8,23 +8,36 @@
{{$NODE_MODE := DefaultParam .NODE_MODE "allnodes"}}
{{$NODES_PER_NAMESPACE := DefaultParam .NODES_PER_NAMESPACE 100}}
{{$PODS_PER_NODE := DefaultParam .PODS_PER_NODE 30}}
-{{$LOAD_TEST_THROUGHPUT := DefaultParam .LOAD_TEST_THROUGHPUT 10}}
+{{$LOAD_TEST_THROUGHPUT := DefaultParam .LOAD_TEST_THROUGHPUT 100}}
Revert
{{$BIG_GROUP_SIZE := 250}}
{{$MEDIUM_GROUP_SIZE := 30}}
{{$SMALL_GROUP_SIZE := 5}}
{{$STATEFUL_SET_SIZE := 10}}
Apparently one of my comments disappeared in the meantime.
What I was suggesting is that, with parallel pod management available, we may want to create one StatefulSet of each size (small, medium and large) per namespace instead of a corresponding deployment.
This would also simplify the logic below (i.e. you would leave everything as it was before and do something like the following; see the template sketch after this pseudocode):
if ENABLE_STATEFUL_SETS {
  if small-deployments > 0 {
    small-deployments--
    small-statefulsets++
  }
  if medium-deployments > 0 {
    ...
  ...
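In the CL2 config's Go-template syntax, that adjustment might look roughly like the sketch below. The variable names are made up for illustration, and SubtractInt is assumed to be one of the integer helpers available to these templates (alongside DefaultParam); treat this as a sketch, not the final config:

{{$ENABLE_STATEFUL_SETS := DefaultParam .ENABLE_STATEFUL_SETS false}}
{{$smallStatefulSetsPerNamespace := 0}}
{{$smallDeploymentsPerNamespace := 10}}  # illustrative starting value
{{if $ENABLE_STATEFUL_SETS}}
  # Swap one small deployment for one small StatefulSet per namespace;
  # the medium and big sizes would follow the same pattern.
  {{$smallStatefulSetsPerNamespace = 1}}
  {{$smallDeploymentsPerNamespace = SubtractInt $smallDeploymentsPerNamespace 1}}
{{end}}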
I'm also fine with (as a first step) getting rid of stateful-sets-per-namespace and explicitly assuming 1 of each.
# TODO: Remove
remove
Force-pushed from b56c389 to 0de2ffe.
Force-pushed from 0de2ffe to 159b650.
@mm4tt: The following test failed; say /retest to rerun it.
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
cpu: 10m
memory: "10M"
{{if $EnablePVs}}
# TODO(#704): We should have better control over deleting PVs.
As I mentioned offline, let's maybe first do only StatefulSets and extend with PVs in a separate step.
Closing this one; will add StatefulSets separately in #776 and then tackle PVs in a separate PR.
Ref. #704
As explained in one TODO, PVs are created by StatefulSets but are not deleted until the namespaces are deleted (so after the test, after all measurements have stopped). This is something we should change.