Extended load - StatefulSets and PVs. #716

Closed
101 changes: 90 additions & 11 deletions clusterloader2/testing/load/config.yaml
@@ -8,25 +8,36 @@
{{$NODE_MODE := DefaultParam .NODE_MODE "allnodes"}}
{{$NODES_PER_NAMESPACE := DefaultParam .NODES_PER_NAMESPACE 100}}
{{$PODS_PER_NODE := DefaultParam .PODS_PER_NODE 30}}
{{$LOAD_TEST_THROUGHPUT := DefaultParam .LOAD_TEST_THROUGHPUT 10}}
{{$LOAD_TEST_THROUGHPUT := DefaultParam .LOAD_TEST_THROUGHPUT 100}}
Member
Revert

{{$BIG_GROUP_SIZE := 250}}
{{$MEDIUM_GROUP_SIZE := 30}}
{{$SMALL_GROUP_SIZE := 5}}
{{$STATEFUL_SET_SIZE := 10}}
Member
Apparently one of my comments disappeared in the meantime.
What I was suggesting is that, with the ability to use parallel pod management, we may want to create one each of a small, medium and large StatefulSet per namespace instead of the corresponding Deployment.

This would also simplify the logic below (i.e. you would leave everything as it was before and do):

if ENABLE_STATEFUL_SETS {
  if small-deployments > 0 {
    small-deployments--
    small-statefulsets++
  }
  if medium-deployments > 0 {
    medium-deployments--
    medium-statefulsets++
  }
  ... and similarly for big ...
}

Member
I'm also fine with (as a first step) getting rid of stateful-sets-per-namespace and explicitly assuming 1 of each.

{{$STATEFUL_SETS_PER_NAMESPACE := 1}}
{{$ENABLE_CHAOSMONKEY := DefaultParam .ENABLE_CHAOSMONKEY false}}
{{$ENABLE_PROMETHEUS_API_RESPONSIVENESS := DefaultParam .ENABLE_PROMETHEUS_API_RESPONSIVENESS false}}
{{$ENABLE_CONFIGMAPS := DefaultParam .ENABLE_CONFIGMAPS false}}
{{$ENABLE_PVS := DefaultParam .ENABLE_PVS false}}
{{$ENABLE_SECRETS := DefaultParam .ENABLE_SECRETS false}}
{{$ENABLE_STATEFULSETS := DefaultParam .ENABLE_STATEFULSETS false}}
#Variables
{{$namespaces := DivideInt .Nodes $NODES_PER_NAMESPACE}}
{{$totalPods := MultiplyInt $namespaces $NODES_PER_NAMESPACE $PODS_PER_NODE}}
{{$podsPerNamespace := DivideInt $totalPods $namespaces}}
{{$saturationTime := DivideInt $totalPods $LOAD_TEST_THROUGHPUT}}
{{$statefulSetPodsPerNamespace := IfThenElse $ENABLE_STATEFULSETS (MultiplyInt $STATEFUL_SET_SIZE $STATEFUL_SETS_PER_NAMESPACE) 0}}
# bigDeployments - 1/4 of namespace pods should be in big Deployments.
{{$bigDeploymentsPerNamespace := DivideInt (MultiplyInt $NODES_PER_NAMESPACE $PODS_PER_NODE) (MultiplyInt 4 $BIG_GROUP_SIZE)}}
{{$bigDeploymentsPerNamespace := DivideInt $podsPerNamespace (MultiplyInt 4 $BIG_GROUP_SIZE)}}
# mediumDeployments - 1/4 of namespace pods should be in medium Deployments.
{{$mediumDeploymentsPerNamespace := DivideInt (MultiplyInt $NODES_PER_NAMESPACE $PODS_PER_NODE) (MultiplyInt 4 $MEDIUM_GROUP_SIZE)}}
{{$mediumDeploymentsPerNamespace := DivideInt $podsPerNamespace (MultiplyInt 4 $MEDIUM_GROUP_SIZE)}}
# smallDeployments - 1/2 of namespace pods should be in small Deployments.
{{$smallDeploymentsPerNamespace := DivideInt (MultiplyInt $NODES_PER_NAMESPACE $PODS_PER_NODE) (MultiplyInt 2 $SMALL_GROUP_SIZE)}}
# Number of small deployments is reduced to make space for stateful sets and stick to the $PODS_PER_NODE density.
{{$smallDeploymentsPerNamespace := DivideInt (SubtractInt (DivideInt $podsPerNamespace 2) $statefulSetPodsPerNamespace) $SMALL_GROUP_SIZE}}
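For reference, with the defaults above (NODES_PER_NAMESPACE = 100, PODS_PER_NODE = 30) this works out to podsPerNamespace = 3000, giving bigDeploymentsPerNamespace = 3000 / (4 * 250) = 3 and mediumDeploymentsPerNamespace = 3000 / (4 * 30) = 25; with StatefulSets enabled, smallDeploymentsPerNamespace = (1500 - 10) / 5 = 298. That adds up to 3 * 250 + 25 * 30 + 298 * 5 + 10 = 3000 pods per namespace, so the $PODS_PER_NODE density is preserved.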

# TODO: Remove
Member
remove

{{$bigDeploymentsPerNamespace := 1}}
{{$mediumDeploymentsPerNamespace := 2}}
{{$smallDeploymentsPerNamespace := 10}}

name: load
automanagedNamespaces: {{$namespaces}}
@@ -115,7 +126,7 @@ steps:
- basename: small-service
objectTemplatePath: service.yaml

- name: Starting measurement for waiting for deployments
- name: Starting measurement for waiting for pods
measurements:
- Identifier: WaitForRunningDeployments
Method: WaitForControlledPodsRunning
@@ -125,8 +136,18 @@
kind: Deployment
labelSelector: group = load
operationTimeout: 15m
{{if $ENABLE_STATEFULSETS}}
- Identifier: WaitForRunningStatefulSets
Method: WaitForControlledPodsRunning
Params:
action: start
apiVersion: apps/v1
kind: StatefulSet
labelSelector: group = load
operationTimeout: 15m
{{end}}

- name: Creating Deployments
- name: Creating objects
phases:
- namespaceRange:
min: 1
@@ -188,15 +209,36 @@
ReplicasMin: {{$SMALL_GROUP_SIZE}}
ReplicasMax: {{$SMALL_GROUP_SIZE}}
SvcName: small-service
{{if $ENABLE_STATEFULSETS}}
- namespaceRange:
min: 1
max: {{$namespaces}}
replicasPerNamespace: {{$STATEFUL_SETS_PER_NAMESPACE}}
tuningSet: RandomizedSaturationTimeLimited
objectBundle:
- basename: stateful-set
objectTemplatePath: statefulset_service.yaml
- basename: stateful-set
objectTemplatePath: statefulset.yaml
templateFillMap:
ReplicasMin: {{$STATEFUL_SET_SIZE}}
ReplicasMax: {{$STATEFUL_SET_SIZE}}
{{end}}

- name: Waiting for deployments to be running
- name: Waiting for pods to be running
measurements:
- Identifier: WaitForRunningDeployments
Method: WaitForControlledPodsRunning
Params:
action: gather
{{if $ENABLE_STATEFULSETS}}
- Identifier: WaitForRunningStatefulSets
Method: WaitForControlledPodsRunning
Params:
action: gather
{{end}}

- name: Scaling Deployments
- name: Scaling objects
phases:
- namespaceRange:
min: 1
@@ -234,15 +276,34 @@
ReplicasMin: {{MultiplyInt $SMALL_GROUP_SIZE 0.5}}
ReplicasMax: {{MultiplyInt $SMALL_GROUP_SIZE 1.5}}
SvcName: small-service
{{if $ENABLE_STATEFULSETS}}
- namespaceRange:
min: 1
max: {{$namespaces}}
replicasPerNamespace: {{$STATEFUL_SETS_PER_NAMESPACE}}
tuningSet: RandomizedScalingTimeLimited
objectBundle:
- basename: stateful-set
objectTemplatePath: statefulset.yaml
templateFillMap:
ReplicasMin: {{MultiplyInt $STATEFUL_SET_SIZE 0.5}}
ReplicasMax: {{MultiplyInt $STATEFUL_SET_SIZE 1.5}}
{{end}}

- name: Waiting for deployments to become scaled
- name: Waiting for objects to become scaled
measurements:
- Identifier: WaitForRunningDeployments
Method: WaitForControlledPodsRunning
Params:
action: gather
{{if $ENABLE_STATEFULSETS}}
- Identifier: WaitForRunningStatefulSets
Method: WaitForControlledPodsRunning
Params:
action: gather
{{end}}

- name: Deleting Deployments
- name: Deleting objects
phases:
- namespaceRange:
min: 1
@@ -292,13 +353,31 @@
- basename: small-deployment
objectTemplatePath: secret.yaml
{{end}}
{{if $ENABLE_STATEFULSETS}}
- namespaceRange:
min: 1
max: {{$namespaces}}
replicasPerNamespace: 0
tuningSet: RandomizedSaturationTimeLimited
objectBundle:
- basename: stateful-set
objectTemplatePath: statefulset.yaml
- basename: stateful-set
objectTemplatePath: statefulset_service.yaml
{{end}}

- name: Waiting for Deployments to be deleted
- name: Waiting for pods to be deleted
measurements:
- Identifier: WaitForRunningDeployments
Method: WaitForControlledPodsRunning
Params:
action: gather
{{if $ENABLE_STATEFULSETS}}
- Identifier: WaitForRunningStatefulSets
Method: WaitForControlledPodsRunning
Params:
action: gather
{{end}}

- name: Deleting SVCs
phases:
@@ -0,0 +1 @@
ENABLE_PVS: true
@@ -0,0 +1 @@
ENABLE_STATEFULSETS: true
56 changes: 56 additions & 0 deletions clusterloader2/testing/load/statefulset.yaml
@@ -0,0 +1,56 @@
{{$EnablePVs := DefaultParam .ENABLE_PVS false}}

apiVersion: apps/v1
kind: StatefulSet
metadata:
name: {{.Name}}
labels:
group: load
spec:
podManagementPolicy: Parallel
selector:
matchLabels:
name: {{.Name}}
serviceName: {{.Name}}
replicas: {{RandIntRange .ReplicasMin .ReplicasMax}}
template:
metadata:
labels:
group: statefulset
name: {{.Name}}
spec:
terminationGracePeriodSeconds: 1
containers:
- name: {{.Name}}
image: k8s.gcr.io/pause:3.1
ports:
- containerPort: 80
name: web
{{if $EnablePVs}}
volumeMounts:
- name: pv
mountPath: /var/pv
{{end}}
resources:
requests:
cpu: 10m
memory: "10M"
{{if $EnablePVs}}
# TODO(#704): We should have better control over deleting PVs.
Member
As I mentioned offline - let's maybe first do only StatefulSets and we will extend with PVs in a separate step.

# While this StatefulSet feature is convenient for creating PVs, there is no easy way to delete
# them later, as deleting a StatefulSet doesn't delete its PVs. They will eventually be deleted
# once the namespace is deleted, but ideally we should have a better (more controlled) way of
# doing that, so we can delete them during the test, e.g. after the StatefulSets are deleted, and
# measure the api call latency for that. There are two options here:
# 1. Create a new measurement that deletes the PVs created by StatefulSets (a rough sketch of
#    this option follows below this file)
# 2. Create PVs manually, attach them here (but not via volumeClaimTemplates) and delete them
#    manually later
volumeClaimTemplates:
- metadata:
name: pv
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 100Mi
{{end}}
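
To illustrate option 1 above: a minimal sketch, in Go, of the PVC cleanup such a measurement could perform, assuming a recent client-go; the function name and the name-prefix matching are illustrative assumptions, not an existing clusterloader2 measurement. PVCs created from a volumeClaimTemplate are named <template-name>-<pod-name> (e.g. pv-stateful-set-0-0 for the template above), which is what the prefix check relies on.

package cleanup

import (
    "context"
    "strings"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// deletePVCsForStatefulSet deletes the PVCs that a StatefulSet's volumeClaimTemplates left behind.
// PVCs created from a volumeClaimTemplate are named <template-name>-<pod-name>, so we match on
// that prefix (illustrative; a real measurement would likely track the claims it created).
func deletePVCsForStatefulSet(ctx context.Context, c kubernetes.Interface, namespace, templateName, statefulSetName string) error {
    prefix := templateName + "-" + statefulSetName + "-"
    pvcs, err := c.CoreV1().PersistentVolumeClaims(namespace).List(ctx, metav1.ListOptions{})
    if err != nil {
        return err
    }
    for _, pvc := range pvcs.Items {
        if !strings.HasPrefix(pvc.Name, prefix) {
            continue
        }
        // Deleting the PVC releases the bound PV; with a "Delete" reclaim policy the PV is removed too.
        if err := c.CoreV1().PersistentVolumeClaims(namespace).Delete(ctx, pvc.Name, metav1.DeleteOptions{}); err != nil {
            return err
        }
    }
    return nil
}
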
10 changes: 10 additions & 0 deletions clusterloader2/testing/load/statefulset_service.yaml
@@ -0,0 +1,10 @@
apiVersion: v1
kind: Service
metadata:
name: {{.Name}}
labels:
name: {{.Name}}
spec:
clusterIP: None
selector:
name: {{.Name}}