📖 Add a design for supporting warm replicas. #3121

Open

wants to merge 1 commit into base: main
Conversation

godwinpang

This change describes the motivation and implementation details for supporting warm replicas in controller-runtime. I have floated this idea offline with @alvaroaleman to address some really slow sources that we work with that take tens of minutes to serve the initial list. There is no open issue discussing it; let me know if that is preferred and I can open one.

Previously discussed in #2005 and #2600

@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label Feb 18, 2025
@k8s-ci-robot
Contributor

Welcome @godwinpang!

It looks like this is your first PR to kubernetes-sigs/controller-runtime 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes-sigs/controller-runtime has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: godwinpang
Once this PR has been reviewed and has the lgtm label, please assign vincepri for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label Feb 18, 2025
@k8s-ci-robot
Contributor

Hi @godwinpang. Thanks for your PR.

I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added the size/L Denotes a PR that changes 100-499 lines, ignoring generated files. label Feb 18, 2025
behavior when manager is not in leader mode. This interface should be as follows:
```go
type WarmupRunnable interface {
	NeedWarmup() bool
}
```

Member

Why add a boolean marker on top of the actual interface?

```go
// GetWarmupRunnable implements WarmupRunnable
func (c *Controller[request]) GetWarmupRunnable() Runnable {
```

Member

Why return a runnable rather than just have a synchronous Warmup()? Runnable is expected to keep running, while this is expected to terminate. There is also a timeout for this in the controller, which we should reuse here.


## Concerns/Questions
1. Controllers opted into this feature will break the workqueue.depth metric as the controller will
have a pre filled queue before it starts processing items.
Member

As discussed offline, one way to avoid this is to use a metrics wrapper that suppresses them until leader election is won. But I'm not sure it's worth bothering.

Member

Does this actually break the metric? It sounds like the metric will just show reality.

It might break alerts that assume the queue length should be pretty low, but that's an issue with the alerts.

Member

> It might break alerts that assume the queue length should be pretty low, but that's an issue with the alerts.

Not sure I quite agree with that. The alerts work today; if we change the behavior here, we break them. To my knowledge there also isn't a metric that indicates whether a given replica is the leader, so I don't see a good way to unbreak them.

Member

Yeah, but the current definition of the metric is that it should show the length of the queue.

Member

> To my knowledge there also isn't a metric that indicates if a given replica is the leader

That could be solved, I hope :)

Member

> Yeah, but the current definition of the metric is that it should show the length of the queue

Be that as it may, the reality is that a lot of people use controller-runtime, so Hyrum's Law applies: that we always have an empty workqueue when not leader is observable behavior, and changing that will break people.

Member

@sbueringer sbueringer Feb 20, 2025

Then maybe we shouldn't store the items in the queue at this time, because that's observable behavior as well (not only through the metric), rather than just make it look like the queue is empty through the metric.

Member

> Then we maybe shouldn't store the items in the queue at this time because that's observable behavior as well (not only through the metric) and not just make it look like the queue is empty through the metric

How would that be observable except through the metric if we don't start the controller?

Member

  • logs via logState
  • other workqueue metrics: adds_total, queue_duration_seconds
    • Although I guess we can also fake these. What would happen when the controller starts? I assume we would set the length metric immediately to its correct value, and similarly for adds_total and probably also queue_duration_seconds.

I also think folks have programmatic access to the queue (at least by instantiating the queue in controller.Options.NewQueue).

So we don't know what kind of things folks are doing with the queue, e.g. accessing queue length or taking items out of the queue even if the leader-election controllers are not running.

2. Ideally, non-leader runnables should block readyz and healthz checks until they are in sync. I am
Member

This will break conversion webhooks. I don't know if there is a good way to figure out whether the binary contains a conversion webhook, but if in doubt we have to retain the current behavior.

2. Ideally, non-leader runnables should block readyz and healthz checks until they are in sync. I am
not sure what the best way of implementing this is, because we would have to add a healthz check
that blocks on WaitForSync for all the sources started as part of the non-leader runnables.
3. An alternative way of implementing the above is to move the source starting / management code
Member

That is kind of already the case: the source uses the cache, which is a runnable, to get the actual informer, and the cache is shared and started before anything else except conversion webhooks (as conversion webhooks might be needed to start it). The problem is just that the cache does not actually cache anything for resources for which no informer was requested, and that in turn only happens after the source is started, which happens post controller start and thus post leader election.


## Motivation
Controllers reconcile all objects during startup / leader election failover to account for changes
in the reconciliation logic. For certain sources, the time to serve the initial list can be
Member

Just curious: is it literally the list call that takes minutes, or the subsequent reconciliation?

Does this change when the new ListWatch feature is used?

Member

This is purely about the time it takes to start reconciling after a replica went down because of a rollout or an unforeseen event. Right now, that means we first acquire the leader lease, then sync all caches, then start reconciling. The goal of this doc is to do the second step before we even try to acquire the leader lease, as that takes the time it takes to sync the caches out of the transition time.

Agree the description could be a bit clearer.

Member

@sbueringer sbueringer Feb 19, 2025

Yeah, that's fine. I was just curious about the list call :) (aka the cache sync)

downtime as even after leader election, the controller has to wait for the initial list to be served
before it can start reconciling.

## Proposal
Member

What if there's a lot of churn and the controller doesn't become leader, maybe not at all or maybe only after a few days?

The queue length will increase while there is nothing that takes items out of the queue.

I know the queue doesn't require significant memory to store an item, but is there something we should be concerned about if the queue has e.g. millions of items (let's say we watch pods and we don't become leader for a month)?

Member

Worst case, it at some point gets OOM-killed, restarts, and does everything again. I don't think this is likely to become an actual issue.

Member

Yeah, definitely not a big problem. Maybe just good to know.

```go
func (cm *controllerManager) Start(ctx context.Context) (err error) {
	// ...

	// Start the warmup runnables
```
Member

At which exact place in the current Start func would this go? (What do we do directly before/after?)

Member

Initially I thought before we start the caches, but that would mean we cannot respect the startup timeout in the controller, so I guess it has to come after.

