From aae5269fb6325a395aaf0ca5d61334380bbd98ab Mon Sep 17 00:00:00 2001
From: Julie Vogelman
Date: Sat, 2 Sep 2023 18:05:25 -0700
Subject: [PATCH 01/18] docs: vertical scaling

Signed-off-by: Julie Vogelman
---
 docs/scaling.md | 26 +++++++++++++++++++++++---
 1 file changed, 23 insertions(+), 3 deletions(-)

diff --git a/docs/scaling.md b/docs/scaling.md
index 19359f257525..a8deae3a4a6e 100644
--- a/docs/scaling.md
+++ b/docs/scaling.md
@@ -14,10 +14,30 @@ As of v3.0, the controller supports having a hot-standby for [High Availability]
 
 You can scale the controller vertically:
 
-- If you have many workflows, increase `--workflow-workers` and `--workflow-ttl-workers`.
-- Increase both `--qps` and `--burst`.
+### Adding Goroutines to Increase Concurrency
 
-You will need to increase the controller's memory and CPU.
+- If you have many Workflows and you notice they're not being reconciled fast enough, increase `--workflow-workers`.
+- If you're using TTLStrategy in your Workflows and you notice they're not being deleted fast enough, increase `--workflow-ttl-workers`.
+- If you're using PodGC in your Workflows and you notice the Pods aren't being deleted fast enough, increase `--pod-cleanup-workers`.
+>= v3.5
+- If you're using a lot of CronWorkflows and they don't seem to be firing on time, increase `--cron-workflow-workers`.
+
+### K8S API Client Side Rate Limiting
+
+The K8S client library rate limits the messages that can go out. The default values are fairly low. If you frequently see a message similar to this (issued by the library):
+
+`Waited for 7.090296384s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/argoproj.io/v1alpha1/namespaces/argo/workflowtemplates/s2t`
+
+in the Controller log, or for >= v3.5: a warning like this (could be any CR, not just WorkflowTemplate):
+
+`Waited for 7.090296384s, request:GET:https://10.100.0.1:443/apis/argoproj.io/v1alpha1/namespaces/argo/workflowtemplates/s2t`
+
+then assuming your K8S API Server can handle it:
+- Increase both `--qps` and `--burst`. The `qps` value indicates the average number of queries per second allowed by the K8S Client. The `--burst` value is the number of queries/sec the Client receives before it starts enforcing `qps`, so typically `--burst` > `qps`.
+
+### Container Resource Requests
+
+If you observe the Controller using its total allocated CPU or memory, you should increase those.
 
 ## Sharding
 

From c9e2f4e10fd10c1acb14adab0b01d8a657e75dae Mon Sep 17 00:00:00 2001
From: Julie Vogelman
Date: Sat, 2 Sep 2023 18:08:15 -0700
Subject: [PATCH 02/18] docs: vertical scaling

Signed-off-by: Julie Vogelman
---
 docs/scaling.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/docs/scaling.md b/docs/scaling.md
index a8deae3a4a6e..ccee41422772 100644
--- a/docs/scaling.md
+++ b/docs/scaling.md
@@ -12,7 +12,7 @@ As of v3.0, the controller supports having a hot-standby for [High Availability]
 
 ## Vertically Scaling
 
-You can scale the controller vertically:
+You can scale the controller vertically in these ways:
 
 ### Adding Goroutines to Increase Concurrency
 
@@ -24,11 +24,11 @@ You can scale the controller vertically:
 
 ### K8S API Client Side Rate Limiting
 
-The K8S client library rate limits the messages that can go out. The default values are fairly low. If you frequently see a message similar to this (issued by the library):
+The K8S client library rate limits the messages that can go out. The default values are fairly low. If you frequently see a message similar to this in the Controller log (issued by the library):
 
 `Waited for 7.090296384s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/argoproj.io/v1alpha1/namespaces/argo/workflowtemplates/s2t`
 
-in the Controller log, or for >= v3.5: a warning like this (could be any CR, not just WorkflowTemplate):
+or for >= v3.5: a warning like this (could be any CR, not just WorkflowTemplate):
 
 `Waited for 7.090296384s, request:GET:https://10.100.0.1:443/apis/argoproj.io/v1alpha1/namespaces/argo/workflowtemplates/s2t`
 

From a91eadd9a069a6aa1d8ad30498ad1b46fd97b56f Mon Sep 17 00:00:00 2001
From: Julie Vogelman
Date: Sat, 2 Sep 2023 18:09:27 -0700
Subject: [PATCH 03/18] docs: vertical scaling

Signed-off-by: Julie Vogelman
---
 docs/scaling.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/docs/scaling.md b/docs/scaling.md
index ccee41422772..1bc4d302a33c 100644
--- a/docs/scaling.md
+++ b/docs/scaling.md
@@ -13,6 +13,9 @@ As of v3.0, the controller supports having a hot-standby for [High Availability]
 ## Vertically Scaling
 
 You can scale the controller vertically in these ways:
+### Container Resource Requests
+
+If you observe the Controller using its total allocated CPU or memory, you should increase those.
 
 ### Adding Goroutines to Increase Concurrency
 
@@ -35,9 +38,6 @@ or for >= v3.5: a warning like this (could be any CR, not just WorkflowTemplate)
 then assuming your K8S API Server can handle it:
 - Increase both `--qps` and `--burst`. The `qps` value indicates the average number of queries per second allowed by the K8S Client. The `--burst` value is the number of queries/sec the Client receives before it starts enforcing `qps`, so typically `--burst` > `qps`.
 
-### Container Resource Requests
-
-If you observe the Controller using its total allocated CPU or memory, you should increase those.
 
 ## Sharding

From 4f6f056c9b314142e3494515df432e5ec9cc4d91 Mon Sep 17 00:00:00 2001
From: Julie Vogelman
Date: Sat, 2 Sep 2023 18:10:06 -0700
Subject: [PATCH 04/18] docs: vertical scaling

Signed-off-by: Julie Vogelman
---
 docs/scaling.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/docs/scaling.md b/docs/scaling.md
index 1bc4d302a33c..33a187dc159e 100644
--- a/docs/scaling.md
+++ b/docs/scaling.md
@@ -19,6 +19,8 @@ If you observe the Controller using its total allocated CPU or memory, you shoul
 
 ### Adding Goroutines to Increase Concurrency
 
+If you have sufficient CPU you can take advantage of it with more goroutines:
+
 - If you have many Workflows and you notice they're not being reconciled fast enough, increase `--workflow-workers`.
 - If you're using TTLStrategy in your Workflows and you notice they're not being deleted fast enough, increase `--workflow-ttl-workers`.
 - If you're using PodGC in your Workflows and you notice the Pods aren't being deleted fast enough, increase `--pod-cleanup-workers`.

From eaf134dda75843083a40ed430712b1556a28c8d1 Mon Sep 17 00:00:00 2001
From: Julie Vogelman
Date: Sat, 2 Sep 2023 18:16:03 -0700
Subject: [PATCH 05/18] docs: fixing make docs

Signed-off-by: Julie Vogelman
---
 .spelling | 1 +
 docs/scaling.md | 13 ++++++++-----
 2 files changed, 9 insertions(+), 5 deletions(-)

diff --git a/.spelling b/.spelling
index 0513bf392186..7e2c6631178f 100644
--- a/.spelling
+++ b/.spelling
@@ -224,6 +224,7 @@ v3.3
 v3.3.
 v3.4
 v3.4.
+v3.5 validator versioning webHDFS diff --git a/docs/scaling.md b/docs/scaling.md index 33a187dc159e..8cf2e363b9b8 100644 --- a/docs/scaling.md +++ b/docs/scaling.md @@ -13,6 +13,7 @@ As of v3.0, the controller supports having a hot-standby for [High Availability] ## Vertically Scaling You can scale the controller vertically in these ways: + ### Container Resource Requests If you observe the Controller using its total allocated CPU or memory, you should increase those. @@ -22,10 +23,12 @@ If you observe the Controller using its total allocated CPU or memory, you shoul If you have sufficient CPU you can take advantage of it with more goroutines: - If you have many Workflows and you notice they're not being reconciled fast enough, increase `--workflow-workers`. -- If you're using TTLStrategy in your Workflows and you notice they're not being deleted fast enough, increase `--workflow-ttl-workers`. -- If you're using PodGC in your Workflows and you notice the Pods aren't being deleted fast enough, increase `--pod-cleanup-workers`. +- If you're using `TTLStrategy` in your Workflows and you notice they're not being deleted fast enough, increase `--workflow-ttl-workers`. +- If you're using `PodGC` in your Workflows and you notice the Pods aren't being deleted fast enough, increase `--pod-cleanup-workers`. + >= v3.5 -- If you're using a lot of CronWorkflows and they don't seem to be firing on time, increase `--cron-workflow-workers`. + +- If you're using a lot of `CronWorkflows` and they don't seem to be firing on time, increase `--cron-workflow-workers`. ### K8S API Client Side Rate Limiting @@ -33,13 +36,13 @@ The K8S client library rate limits the messages that can go out. The default val `Waited for 7.090296384s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/argoproj.io/v1alpha1/namespaces/argo/workflowtemplates/s2t` -or for >= v3.5: a warning like this (could be any CR, not just WorkflowTemplate): +or for >= v3.5: a warning like this (could be any CR, not just `WorkflowTemplate`): `Waited for 7.090296384s, request:GET:https://10.100.0.1:443/apis/argoproj.io/v1alpha1/namespaces/argo/workflowtemplates/s2t` then assuming your K8S API Server can handle it: -- Increase both `--qps` and `--burst`. The `qps` value indicates the average number of queries per second allowed by the K8S Client. The `--burst` value is the number of queries/sec the Client receives before it starts enforcing `qps`, so typically `--burst` > `qps`. +- Increase both `--qps` and `--burst`. The `qps` value indicates the average number of queries per second allowed by the K8S Client. The `--burst` value is the number of queries/sec the Client receives before it starts enforcing `qps`, so typically `--burst` > `qps`. ## Sharding From 5a1e527005066cc04ce8a479d53d5ad410bf9d1f Mon Sep 17 00:00:00 2001 From: Julie Vogelman Date: Sat, 2 Sep 2023 19:16:34 -0700 Subject: [PATCH 06/18] docs: fixing make docs Signed-off-by: Julie Vogelman --- docs/scaling.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/scaling.md b/docs/scaling.md index 8cf2e363b9b8..70a380053f35 100644 --- a/docs/scaling.md +++ b/docs/scaling.md @@ -16,7 +16,7 @@ You can scale the controller vertically in these ways: ### Container Resource Requests -If you observe the Controller using its total allocated CPU or memory, you should increase those. +If you observe the Controller using its total request CPU or memory, you should increase those. 
### Adding Goroutines to Increase Concurrency From 41cc1e263525bf25dac6364cf25326d98c8dfde5 Mon Sep 17 00:00:00 2001 From: Julie Vogelman Date: Sun, 3 Sep 2023 20:14:49 -0700 Subject: [PATCH 07/18] Update docs/scaling.md Co-authored-by: Anton Gilgur <4970083+agilgur5@users.noreply.github.com> Signed-off-by: Julie Vogelman --- docs/scaling.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/scaling.md b/docs/scaling.md index 70a380053f35..07986c44a19f 100644 --- a/docs/scaling.md +++ b/docs/scaling.md @@ -16,7 +16,7 @@ You can scale the controller vertically in these ways: ### Container Resource Requests -If you observe the Controller using its total request CPU or memory, you should increase those. +If you observe the Controller using its total CPU or memory requests, you should increase those. ### Adding Goroutines to Increase Concurrency From ffb5881a0b009056991c59377fc223d7cce0ccbe Mon Sep 17 00:00:00 2001 From: Julie Vogelman Date: Sun, 3 Sep 2023 20:15:52 -0700 Subject: [PATCH 08/18] Update docs/scaling.md Co-authored-by: Anton Gilgur <4970083+agilgur5@users.noreply.github.com> Signed-off-by: Julie Vogelman --- docs/scaling.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/scaling.md b/docs/scaling.md index 07986c44a19f..ad489d4a6670 100644 --- a/docs/scaling.md +++ b/docs/scaling.md @@ -20,7 +20,7 @@ If you observe the Controller using its total CPU or memory requests, you should ### Adding Goroutines to Increase Concurrency -If you have sufficient CPU you can take advantage of it with more goroutines: +If you have sufficient CPU cores, you can take advantage of them with more goroutines: - If you have many Workflows and you notice they're not being reconciled fast enough, increase `--workflow-workers`. - If you're using `TTLStrategy` in your Workflows and you notice they're not being deleted fast enough, increase `--workflow-ttl-workers`. From 0d29faea92ff17b1d2c89ca1da350cddb018eef2 Mon Sep 17 00:00:00 2001 From: Julie Vogelman Date: Sun, 3 Sep 2023 20:16:03 -0700 Subject: [PATCH 09/18] Update docs/scaling.md Co-authored-by: Anton Gilgur <4970083+agilgur5@users.noreply.github.com> Signed-off-by: Julie Vogelman --- docs/scaling.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/scaling.md b/docs/scaling.md index ad489d4a6670..ebd4a2f74fa9 100644 --- a/docs/scaling.md +++ b/docs/scaling.md @@ -26,7 +26,7 @@ If you have sufficient CPU cores, you can take advantage of them with more gorou - If you're using `TTLStrategy` in your Workflows and you notice they're not being deleted fast enough, increase `--workflow-ttl-workers`. - If you're using `PodGC` in your Workflows and you notice the Pods aren't being deleted fast enough, increase `--pod-cleanup-workers`. ->= v3.5 +> v3.5 and after - If you're using a lot of `CronWorkflows` and they don't seem to be firing on time, increase `--cron-workflow-workers`. 
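The worker-count options documented in the patches up to this point (`--workflow-workers`, `--workflow-ttl-workers`, `--pod-cleanup-workers`, and, from v3.5 on, `--cron-workflow-workers`) are command-line arguments of the Workflow Controller. As a purely illustrative sketch of where such arguments are set, the Deployment excerpt below shows one possible configuration; the Deployment and container names, namespace, image tag, and every count are assumptions for illustration and are not taken from this patch series.

```yaml
# Sketch only: a workflow-controller Deployment excerpt with more worker goroutines.
# All names, the image tag, and the counts below are illustrative assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: workflow-controller
  namespace: argo
spec:
  selector:
    matchLabels:
      app: workflow-controller
  template:
    metadata:
      labels:
        app: workflow-controller
    spec:
      containers:
        - name: workflow-controller
          image: quay.io/argoproj/workflow-controller:v3.5.0  # assumed tag
          command: [workflow-controller]
          args:
            - --workflow-workers=64       # more concurrent Workflow reconciliation
            - --workflow-ttl-workers=8    # faster TTLStrategy-driven Workflow deletion
            - --pod-cleanup-workers=16    # faster PodGC Pod deletion
            - --cron-workflow-workers=16  # v3.5 and after: CronWorkflow triggering
```

As the patches themselves note, extra goroutines only help while the controller has spare CPU, which is why this advice is paired with the resource-request guidance.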
From 7fedc86b56dee5fd075eae7b49ece0c6eb32fee3 Mon Sep 17 00:00:00 2001 From: Julie Vogelman Date: Sun, 3 Sep 2023 20:16:54 -0700 Subject: [PATCH 10/18] Update docs/scaling.md Co-authored-by: Anton Gilgur <4970083+agilgur5@users.noreply.github.com> Signed-off-by: Julie Vogelman --- docs/scaling.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/docs/scaling.md b/docs/scaling.md index ebd4a2f74fa9..f0ebdab20c4b 100644 --- a/docs/scaling.md +++ b/docs/scaling.md @@ -32,7 +32,9 @@ If you have sufficient CPU cores, you can take advantage of them with more gorou ### K8S API Client Side Rate Limiting -The K8S client library rate limits the messages that can go out. The default values are fairly low. If you frequently see a message similar to this in the Controller log (issued by the library): +The K8S client library rate limits the messages that can go out. The default values are fairly low. + +If you frequently see messages similar to this in the Controller log (issued by the library): `Waited for 7.090296384s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/argoproj.io/v1alpha1/namespaces/argo/workflowtemplates/s2t` From c07cd68d9e5fb6e3ba8bdba9f802db1ab980d9c9 Mon Sep 17 00:00:00 2001 From: Julie Vogelman Date: Sun, 3 Sep 2023 20:18:13 -0700 Subject: [PATCH 11/18] Update docs/scaling.md Co-authored-by: Anton Gilgur <4970083+agilgur5@users.noreply.github.com> Signed-off-by: Julie Vogelman --- docs/scaling.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/scaling.md b/docs/scaling.md index f0ebdab20c4b..c2415f6a34e6 100644 --- a/docs/scaling.md +++ b/docs/scaling.md @@ -42,7 +42,7 @@ or for >= v3.5: a warning like this (could be any CR, not just `WorkflowTemplate `Waited for 7.090296384s, request:GET:https://10.100.0.1:443/apis/argoproj.io/v1alpha1/namespaces/argo/workflowtemplates/s2t` -then assuming your K8S API Server can handle it: +Then, if your K8S API Server can handle more requests: - Increase both `--qps` and `--burst`. The `qps` value indicates the average number of queries per second allowed by the K8S Client. The `--burst` value is the number of queries/sec the Client receives before it starts enforcing `qps`, so typically `--burst` > `qps`. From 2b14b6da637859501626fc713bb35ae90279db09 Mon Sep 17 00:00:00 2001 From: Julie Vogelman Date: Sun, 3 Sep 2023 20:18:41 -0700 Subject: [PATCH 12/18] Update docs/scaling.md Co-authored-by: Anton Gilgur <4970083+agilgur5@users.noreply.github.com> Signed-off-by: Julie Vogelman --- docs/scaling.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/scaling.md b/docs/scaling.md index c2415f6a34e6..8a24da63ea72 100644 --- a/docs/scaling.md +++ b/docs/scaling.md @@ -44,7 +44,7 @@ or for >= v3.5: a warning like this (could be any CR, not just `WorkflowTemplate Then, if your K8S API Server can handle more requests: -- Increase both `--qps` and `--burst`. The `qps` value indicates the average number of queries per second allowed by the K8S Client. The `--burst` value is the number of queries/sec the Client receives before it starts enforcing `qps`, so typically `--burst` > `qps`. +- Increase both `--qps` and `--burst`. The `qps` value indicates the average number of queries per second allowed by the K8S Client. The `burst` value is the number of queries/sec the Client receives before it starts enforcing `qps`, so typically `burst` > `qps`. 
## Sharding From b4b80c3b2d8b28bfd38083c71344361d9065d31c Mon Sep 17 00:00:00 2001 From: Julie Vogelman Date: Sun, 3 Sep 2023 20:21:30 -0700 Subject: [PATCH 13/18] docs: formatting Signed-off-by: Julie Vogelman --- docs/scaling.md | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/docs/scaling.md b/docs/scaling.md index 8a24da63ea72..d1c7e36875a3 100644 --- a/docs/scaling.md +++ b/docs/scaling.md @@ -36,11 +36,15 @@ The K8S client library rate limits the messages that can go out. The default val If you frequently see messages similar to this in the Controller log (issued by the library): -`Waited for 7.090296384s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/argoproj.io/v1alpha1/namespaces/argo/workflowtemplates/s2t` +``` +Waited for 7.090296384s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/argoproj.io/v1alpha1/namespaces/argo/workflowtemplates/s2t +``` or for >= v3.5: a warning like this (could be any CR, not just `WorkflowTemplate`): -`Waited for 7.090296384s, request:GET:https://10.100.0.1:443/apis/argoproj.io/v1alpha1/namespaces/argo/workflowtemplates/s2t` +``` +Waited for 7.090296384s, request:GET:https://10.100.0.1:443/apis/argoproj.io/v1alpha1/namespaces/argo/workflowtemplates/s2t +``` Then, if your K8S API Server can handle more requests: From 24a58f01491ebca6211419d9908a1301f39ab20e Mon Sep 17 00:00:00 2001 From: Julie Vogelman Date: Mon, 4 Sep 2023 07:29:06 -0700 Subject: [PATCH 14/18] Update docs/scaling.md Co-authored-by: Anton Gilgur <4970083+agilgur5@users.noreply.github.com> Signed-off-by: Julie Vogelman --- docs/scaling.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/scaling.md b/docs/scaling.md index d1c7e36875a3..8c4da0e1bc87 100644 --- a/docs/scaling.md +++ b/docs/scaling.md @@ -36,7 +36,7 @@ The K8S client library rate limits the messages that can go out. 
The default val If you frequently see messages similar to this in the Controller log (issued by the library): -``` +```txt Waited for 7.090296384s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/argoproj.io/v1alpha1/namespaces/argo/workflowtemplates/s2t ``` From 0b6257543e84f745e6e4c214a70e20a3917c2603 Mon Sep 17 00:00:00 2001 From: Julie Vogelman Date: Mon, 4 Sep 2023 07:29:20 -0700 Subject: [PATCH 15/18] Update docs/scaling.md Co-authored-by: Anton Gilgur <4970083+agilgur5@users.noreply.github.com> Signed-off-by: Julie Vogelman --- docs/scaling.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/scaling.md b/docs/scaling.md index 8c4da0e1bc87..92b045bb28db 100644 --- a/docs/scaling.md +++ b/docs/scaling.md @@ -42,7 +42,7 @@ Waited for 7.090296384s due to client-side throttling, not priority and fairness or for >= v3.5: a warning like this (could be any CR, not just `WorkflowTemplate`): -``` +```txt Waited for 7.090296384s, request:GET:https://10.100.0.1:443/apis/argoproj.io/v1alpha1/namespaces/argo/workflowtemplates/s2t ``` From 52267574b5b264b423478fcd9190de20a82df49b Mon Sep 17 00:00:00 2001 From: Julie Vogelman Date: Mon, 4 Sep 2023 07:30:32 -0700 Subject: [PATCH 16/18] Update docs/scaling.md Co-authored-by: Anton Gilgur <4970083+agilgur5@users.noreply.github.com> Signed-off-by: Julie Vogelman --- docs/scaling.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/scaling.md b/docs/scaling.md index 92b045bb28db..3c7a4da86b11 100644 --- a/docs/scaling.md +++ b/docs/scaling.md @@ -40,7 +40,7 @@ If you frequently see messages similar to this in the Controller log (issued by Waited for 7.090296384s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/argoproj.io/v1alpha1/namespaces/argo/workflowtemplates/s2t ``` -or for >= v3.5: a warning like this (could be any CR, not just `WorkflowTemplate`): +Or, in >= v3.5, if you see warnings similar to this (could be any CR, not just `WorkflowTemplate`): ```txt Waited for 7.090296384s, request:GET:https://10.100.0.1:443/apis/argoproj.io/v1alpha1/namespaces/argo/workflowtemplates/s2t From 79a437c30126e5ef1de4cc1b04b38831c6ea1488 Mon Sep 17 00:00:00 2001 From: Julie Vogelman Date: Mon, 4 Sep 2023 15:12:30 -0700 Subject: [PATCH 17/18] docs: add default values for qps and burst Signed-off-by: Julie Vogelman --- docs/scaling.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/scaling.md b/docs/scaling.md index 3c7a4da86b11..1b81cb966503 100644 --- a/docs/scaling.md +++ b/docs/scaling.md @@ -32,7 +32,7 @@ If you have sufficient CPU cores, you can take advantage of them with more gorou ### K8S API Client Side Rate Limiting -The K8S client library rate limits the messages that can go out. The default values are fairly low. +The K8S client library rate limits the messages that can go out. If you frequently see messages similar to this in the Controller log (issued by the library): @@ -48,7 +48,7 @@ Waited for 7.090296384s, request:GET:https://10.100.0.1:443/apis/argoproj.io/v1a Then, if your K8S API Server can handle more requests: -- Increase both `--qps` and `--burst`. The `qps` value indicates the average number of queries per second allowed by the K8S Client. The `burst` value is the number of queries/sec the Client receives before it starts enforcing `qps`, so typically `burst` > `qps`. +- Increase both `--qps` and `--burst` arguments for the Controller. 
The `qps` value indicates the average number of queries per second allowed by the K8S Client. The `burst` value is the number of queries/sec the Client receives before it starts enforcing `qps`, so typically `burst` > `qps`. If not set, the default values are `qps=20` and `burst=30` (as of v3.5 (refer to `cmd/workflow-controller/main.go` in case the values change)). ## Sharding From 28a83410b4ef0218d488db9817b39e851f4392e1 Mon Sep 17 00:00:00 2001 From: Julie Vogelman Date: Mon, 4 Sep 2023 17:55:16 -0700 Subject: [PATCH 18/18] docs: fix 'make docs' Signed-off-by: Julie Vogelman --- docs/scaling.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/scaling.md b/docs/scaling.md index 1b81cb966503..7c6248f0a9a3 100644 --- a/docs/scaling.md +++ b/docs/scaling.md @@ -32,7 +32,7 @@ If you have sufficient CPU cores, you can take advantage of them with more gorou ### K8S API Client Side Rate Limiting -The K8S client library rate limits the messages that can go out. +The K8S client library rate limits the messages that can go out. If you frequently see messages similar to this in the Controller log (issued by the library):
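Rounding out the final revision's advice on client-side rate limiting and container resource requests, here is a hypothetical strategic-merge patch (applied, for example, with kustomize or `kubectl patch`) that raises `--qps`/`--burst` and the controller's requests together. The numbers are illustrative assumptions only; the documentation above gives the real defaults (`qps=20`, `burst=30` as of v3.5) and the rule of thumb that `burst` should exceed `qps`.

```yaml
# Sketch only: illustrative values, not recommendations from the patch series.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: workflow-controller
  namespace: argo
spec:
  template:
    spec:
      containers:
        - name: workflow-controller
          args:
            - --qps=50    # average K8S API queries per second allowed by the client
            - --burst=75  # keep burst greater than qps
          resources:
            requests:
              cpu: "1"     # raise if the controller sits at its CPU request
              memory: 2Gi  # raise if the controller sits at its memory request
```

Note that a strategic-merge patch replaces the whole `args` list, so any other controller arguments in use would need to be repeated alongside `--qps` and `--burst`.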