Remove refetching from resourceWatcher #14262
Merged
Conversation
The resourceWatcher is meant to be a long-lived way for a component to receive events about a particular resource from an upstream cache. However, there was a refetching mechanism that, **every 10 minutes**, would close a healthy and subscribed watcher, cause the resourceWatcher to fetch all the resource types it is watching from the upstream cache, and create a new watcher. This put unneeded load on the upstream cache and also ate up network bandwidth. This change removes the refetching behavior entirely to ensure watchers aren't unnecessarily closed. It should be transparent to users of the resourceWatcher, but should noticeably reduce both the number of init events emitted throughout a cluster and the number of cache reads. Fixes #14234
Metrics from running a 10k cluster off this branch. Compare against the snapshot from #14234 to see the reduction in init events and cache reads.
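For context, here is a minimal Go sketch of the behavior being removed versus the behavior after this change. All names here (`upstream`, `watcher`, `FetchAll`, `watchWithRefetch`, `watchPersistent`) are illustrative assumptions, not the actual identifiers in `lib/services/watcher.go`:

```go
// Illustrative sketch only, not Teleport's actual API: the names
// upstream, watcher, and FetchAll are assumptions.
package sketch

import (
	"context"
	"time"
)

type watcher interface {
	Events() <-chan struct{}
	Close() error
}

type upstream interface {
	NewWatcher(ctx context.Context) (watcher, error)
	FetchAll(ctx context.Context) error // full re-read of every watched resource type
}

// watchWithRefetch shows the removed behavior: even a healthy, subscribed
// watcher is torn down every refetchInterval, forcing a full FetchAll and a
// brand-new watcher each cycle.
func watchWithRefetch(ctx context.Context, up upstream, refetchInterval time.Duration) error {
	for {
		w, err := up.NewWatcher(ctx)
		if err != nil {
			return err
		}
		if err := up.FetchAll(ctx); err != nil { // extra cache reads, every cycle
			w.Close()
			return err
		}
		timer := time.NewTimer(refetchInterval)
	inner:
		for {
			select {
			case <-timer.C: // healthy watcher closed for no good reason
				break inner
			case <-w.Events():
				// handle event
			case <-ctx.Done():
				timer.Stop()
				w.Close()
				return ctx.Err()
			}
		}
		timer.Stop()
		w.Close()
	}
}

// watchPersistent shows the behavior after this PR: subscribe once and stay
// subscribed; the watcher is closed only when the context is cancelled (or,
// in the real code, when the watch itself fails).
func watchPersistent(ctx context.Context, up upstream) error {
	w, err := up.NewWatcher(ctx)
	if err != nil {
		return err
	}
	defer w.Close()
	for {
		select {
		case <-w.Events():
			// handle event
		case <-ctx.Done():
			return ctx.Err()
		}
	}
}
```

The substance of the change is the second loop: the watcher lives until it actually fails, so the upstream cache is never re-read on a timer.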
fspmarshall
approved these changes
Jul 8, 2022
red diffs ❤️
espadolini
approved these changes
Jul 8, 2022
zmb3
approved these changes
Jul 10, 2022
@rosstimothy See the table below for backport results.
This was referenced Jul 11, 2022
rosstimothy
added a commit
that referenced
this pull request
Jul 11, 2022
The resourceWatcher is meant to be a long-lived way for a component to receive events about a particular resource from an upstream cache. However, there was a refetching mechanism that, **every 10 minutes**, would close a healthy and subscribed watcher, cause the resourceWatcher to fetch all the resource types it is watching from the upstream cache, and create a new watcher. This put unneeded load on the upstream cache and also ate up network bandwidth. This change removes the refetching behavior entirely to ensure watchers aren't unnecessarily closed. It should be transparent to users of the resourceWatcher, but should noticeably reduce both the number of init events emitted throughout a cluster and the number of cache reads. Fixes #14234
(cherry picked from commit dea633f)
# Conflicts:
#	lib/services/watcher.go
💚 All backports created successfully
Questions? Please refer to the Backport tool documentation.
rosstimothy
added a commit
that referenced
this pull request
Jul 12, 2022
Remove refetching from resourceWatcher (#14262)
The resourceWatcher is meant to be a long-lived way for a component to receive events about a particular resource from an upstream cache. However, there was a refetching mechanism that, **every 10 minutes**, would close a healthy and subscribed watcher, cause the resourceWatcher to fetch all the resource types it is watching from the upstream cache, and create a new watcher. This put unneeded load on the upstream cache and also ate up network bandwidth. This change removes the refetching behavior entirely to ensure watchers aren't unnecessarily closed. It should be transparent to users of the resourceWatcher, but should noticeably reduce both the number of init events emitted throughout a cluster and the number of cache reads. Fixes #14234
(cherry picked from commit dea633f)
# Conflicts:
#	lib/services/watcher.go
rosstimothy
added a commit
that referenced
this pull request
Nov 2, 2022
Prior to #14262, resource watchers would periodically close their watcher, create a new one, and refetch the current set of resources. It turns out that the reverse tunnel subsystem relied on this behavior to periodically broadcast the list of proxies to agents during steady state. Now that watchers are persistent and no longer perform a refetch, agents that are unable to connect to a proxy expire it after a period of time, and since they never receive the periodic refresh, they never attempt to connect to that proxy again. To remedy this, a new ticker is added to the `localsite` that grabs the current set of proxies from its proxy watcher and sends a discovery request to the agent. The ticker is set to fire before the tracker would expire the proxy, so that if a proxy exists in the cluster, the agent will continually try to connect to it.
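To make the fix concrete, here is a minimal Go sketch of the resync loop described above. The names (`proxyWatcher`, `sendDiscovery`, `trackerExpiry`) and the half-expiry period are assumptions for illustration, not the actual `localsite` code:

```go
// Illustrative sketch only: proxyWatcher, sendDiscovery, and trackerExpiry
// are assumed names, not Teleport's actual localsite implementation.
package sketch

import (
	"context"
	"time"
)

type proxyWatcher interface {
	CurrentProxies() []string // current set of known proxies
}

// resyncProxies periodically re-announces the current proxy set to an agent.
// The ticker period is deliberately shorter than the agent-side tracker
// expiry, so a proxy that still exists in the cluster is always refreshed
// before the agent would expire it.
func resyncProxies(ctx context.Context, pw proxyWatcher, sendDiscovery func(proxies []string) error, trackerExpiry time.Duration) error {
	ticker := time.NewTicker(trackerExpiry / 2) // fire well before expiry
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			if err := sendDiscovery(pw.CurrentProxies()); err != nil {
				return err
			}
		case <-ctx.Done():
			return ctx.Err()
		}
	}
}
```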
rosstimothy
added a commit
that referenced
this pull request
Nov 3, 2022
Prior to #14262, resource watchers would periodically close their watcher, create a new one, and refetch the current set of resources. It turns out that the reverse tunnel subsystem relied on this behavior to periodically broadcast the list of proxies to agents during steady state. Now that watchers are persistent and no longer perform a refetch, agents that are unable to connect to a proxy expire it after a period of time, and since they never receive the periodic refresh, they never attempt to connect to that proxy again. To remedy this, a new ticker is added to the `localsite` that grabs the current set of proxies from its proxy watcher and sends a discovery request to the agent. The ticker is set to fire before the tracker would expire the proxy, so that if a proxy exists in the cluster, the agent will continually try to connect to it.
rosstimothy
added a commit
that referenced
this pull request
Nov 4, 2022
* Periodically resync proxies to agents
Prior to #14262, resource watchers would periodically close their watcher, create a new one, and refetch the current set of resources. It turns out that the reverse tunnel subsystem relied on this behavior to periodically broadcast the list of proxies to agents during steady state. Now that watchers are persistent and no longer perform a refetch, agents that are unable to connect to a proxy expire it after a period of time, and since they never receive the periodic refresh, they never attempt to connect to that proxy again. To remedy this, a new ticker is added to the `localsite` that grabs the current set of proxies from its proxy watcher and sends a discovery request to the agent. The ticker is set to fire before the tracker would expire the proxy, so that if a proxy exists in the cluster, the agent will continually try to connect to it.
github-actions bot
pushed a commit
that referenced
this pull request
Nov 4, 2022
Prior to #14262, resource watchers would periodically close their watcher, create a new one, and refetch the current set of resources. It turns out that the reverse tunnel subsystem relied on this behavior to periodically broadcast the list of proxies to agents during steady state. Now that watchers are persistent and no longer perform a refetch, agents that are unable to connect to a proxy expire it after a period of time, and since they never receive the periodic refresh, they never attempt to connect to that proxy again. To remedy this, a new ticker is added to the `localsite` that grabs the current set of proxies from its proxy watcher and sends a discovery request to the agent. The ticker is set to fire before the tracker would expire the proxy, so that if a proxy exists in the cluster, the agent will continually try to connect to it.
rosstimothy
added a commit
that referenced
this pull request
Nov 4, 2022
Prior to #14262, resource watchers would periodically close their watcher, create a new one, and refetch the current set of resources. It turns out that the reverse tunnel subsystem relied on this behavior to periodically broadcast the list of proxies to agents during steady state. Now that watchers are persistent and no longer perform a refetch, agents that are unable to connect to a proxy expire it after a period of time, and since they never receive the periodic refresh, they never attempt to connect to that proxy again. To remedy this, a new ticker is added to the `localsite` that grabs the current set of proxies from its proxy watcher and sends a discovery request to the agent. The ticker is set to fire before the tracker would expire the proxy, so that if a proxy exists in the cluster, the agent will continually try to connect to it.
rosstimothy
added a commit
that referenced
this pull request
Nov 4, 2022
Prior to #14262, resource watchers would periodically close their watcher, create a new one, and refetch the current set of resources. It turns out that the reverse tunnel subsystem relied on this behavior to periodically broadcast the list of proxies to agents during steady state. Now that watchers are persistent and no longer perform a refetch, agents that are unable to connect to a proxy expire it after a period of time, and since they never receive the periodic refresh, they never attempt to connect to that proxy again. To remedy this, a new ticker is added to the `localsite` that grabs the current set of proxies from its proxy watcher and sends a discovery request to the agent. The ticker is set to fire before the tracker would expire the proxy, so that if a proxy exists in the cluster, the agent will continually try to connect to it.
rosstimothy
added a commit
that referenced
this pull request
Nov 4, 2022
Prior to #14262, resource watchers would periodically close their watcher, create a new one, and refetch the current set of resources. It turns out that the reverse tunnel subsystem relied on this behavior to periodically broadcast the list of proxies to agents during steady state. Now that watchers are persistent and no longer perform a refetch, agents that are unable to connect to a proxy expire it after a period of time, and since they never receive the periodic refresh, they never attempt to connect to that proxy again. To remedy this, a new ticker is added to the `localsite` that grabs the current set of proxies from its proxy watcher and sends a discovery request to the agent. The ticker is set to fire before the tracker would expire the proxy, so that if a proxy exists in the cluster, the agent will continually try to connect to it.
rosstimothy
added a commit
that referenced
this pull request
Nov 7, 2022
* Periodically resync proxies to agents (#18050)
Prior to #14262, resource watchers would periodically close their watcher, create a new one, and refetch the current set of resources. It turns out that the reverse tunnel subsystem relied on this behavior to periodically broadcast the list of proxies to agents during steady state. Now that watchers are persistent and no longer perform a refetch, agents that are unable to connect to a proxy expire it after a period of time, and since they never receive the periodic refresh, they never attempt to connect to that proxy again. To remedy this, a new ticker is added to the `localsite` that grabs the current set of proxies from its proxy watcher and sends a discovery request to the agent. The ticker is set to fire before the tracker would expire the proxy, so that if a proxy exists in the cluster, the agent will continually try to connect to it.
rosstimothy
added a commit
that referenced
this pull request
Nov 7, 2022
* Periodically resync proxies to agents
Prior to #14262, resource watchers would periodically close their watcher, create a new one, and refetch the current set of resources. It turns out that the reverse tunnel subsystem relied on this behavior to periodically broadcast the list of proxies to agents during steady state. Now that watchers are persistent and no longer perform a refetch, agents that are unable to connect to a proxy expire it after a period of time, and since they never receive the periodic refresh, they never attempt to connect to that proxy again. To remedy this, a new ticker is added to the `localsite` that grabs the current set of proxies from its proxy watcher and sends a discovery request to the agent. The ticker is set to fire before the tracker would expire the proxy, so that if a proxy exists in the cluster, the agent will continually try to connect to it.
rosstimothy
added a commit
that referenced
this pull request
Nov 7, 2022
Prior to #14262, resource watchers would periodically close their watcher, create a new one, and refetch the current set of resources. It turns out that the reverse tunnel subsystem relied on this behavior to periodically broadcast the list of proxies to agents during steady state. Now that watchers are persistent and no longer perform a refetch, agents that are unable to connect to a proxy expire it after a period of time, and since they never receive the periodic refresh, they never attempt to connect to that proxy again. To remedy this, a new ticker is added to the `localsite` that grabs the current set of proxies from its proxy watcher and sends a discovery request to the agent. The ticker is set to fire before the tracker would expire the proxy, so that if a proxy exists in the cluster, the agent will continually try to connect to it.
rosstimothy
added a commit
that referenced
this pull request
Nov 7, 2022
Prior to #14262, resource watchers would periodically close their watcher, create a new one, and refetch the current set of resources. It turns out that the reverse tunnel subsystem relied on this behavior to periodically broadcast the list of proxies to agents during steady state. Now that watchers are persistent and no longer perform a refetch, agents that are unable to connect to a proxy expire it after a period of time, and since they never receive the periodic refresh, they never attempt to connect to that proxy again. To remedy this, a new ticker is added to the `localsite` that grabs the current set of proxies from its proxy watcher and sends a discovery request to the agent. The ticker is set to fire before the tracker would expire the proxy, so that if a proxy exists in the cluster, the agent will continually try to connect to it.