Limit objects cached by DevWorkspace controller to reduce memory usage #652
Conversation
Modify the cache used in the controller to restrict it to items with the devworkspace_id label. This has the downside of making all objects _without_ that label invisible to the controller, but has the benefit of reduced memory usage on large clusters. Signed-off-by: Angel Misevski <amisevsk@redhat.com>
Add labels controller.devfile.io/watch-configmap and controller.devfile.io/watch-secret, which must be set to "true" in order for the DevWorkspace Operator to see the corresponding secret/configmap. This is required (compared to the previous commit) because the controller is not only interested in secrets and configmaps it creates, but also in any configmap/secret on the cluster with e.g. the automount label attached. Since each type in the controller gets a single informer, we can only specify a single label selector for the objects we are interested in. This means we cannot express e.g. "has devworkspace_id label OR has mount-to-devworkspace label". Signed-off-by: Angel Misevski <amisevsk@redhat.com>
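A minimal sketch of the resulting cache configuration, assuming controller-runtime v0.9's cache selector API (`cache.BuilderWithOptions`/`SelectorsByObject`); the operator's actual wiring may differ:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/labels"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/cache"
)

func main() {
	// Secrets/configmaps are cached only when explicitly opted in via the
	// watch labels; a single label selector is allowed per type, so there is
	// no way to express "watch label OR devworkspace_id label" here.
	selectors := cache.SelectorsByObject{
		&corev1.Secret{}: {Label: labels.SelectorFromSet(labels.Set{
			"controller.devfile.io/watch-secret": "true",
		})},
		&corev1.ConfigMap{}: {Label: labels.SelectorFromSet(labels.Set{
			"controller.devfile.io/watch-configmap": "true",
		})},
	}

	_, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		// Replace the default cache with one that filters by the selectors above.
		NewCache: cache.BuilderWithOptions(cache.Options{
			SelectorsByObject: selectors,
		}),
	})
	if err != nil {
		panic(err)
	}
}
```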
Restricting the cache to only configmaps with the new label results in existing workspaces failing to reconcile. This occurs because attempting to Get() the configmap returns an IsNotFound error, whereas attempting to Create() the configmap returns an AlreadyExists error (Create interacts with the cluster, Get interacts with the cache). To avoid this, if we encounter an AlreadyExists error when attempting to create an object, we optimistically try to update the object (thus adding the required label). This resolves the issue above: once the object is updated, the subsequent Get() call returns the object as expected. Signed-off-by: Angel Misevski <amisevsk@redhat.com>
Restricting the controller-runtime cache to specific objects means that once-tracked objects can disappear from the controller's knowledge if the required label is removed. To work around this, it is necessary to update how we sync objects to specifically handle the case where:

* client.Get(object) returns IsNotFound
* client.Create(object) returns AlreadyExists

This occurs because we can't read objects that aren't in the cache, but attempting to create such objects collides with the actual object on the cluster. Since the basic flow of Get -> Create/Update is repeated for each type we handle, this commit collects that repeated logic into one package (pkg/provision/sync), allowing object handling to be done in one place. Signed-off-by: Angel Misevski <amisevsk@redhat.com>
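A minimal sketch of that consolidated Get -> Create -> Update flow; `SyncObject` is a hypothetical name standing in for whatever pkg/provision/sync actually exports, and diffing/requeue behavior is elided:

```go
package sync

import (
	"context"

	k8sErrors "k8s.io/apimachinery/pkg/api/errors"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// SyncObject ensures specObj exists on the cluster. Because the cache is
// label-restricted, Get can return IsNotFound for an object that actually
// exists, so an AlreadyExists on Create falls through to an optimistic
// Update that (re)applies the required labels.
func SyncObject(ctx context.Context, c client.Client, specObj client.Object) error {
	clusterObj := specObj.DeepCopyObject().(client.Object)
	err := c.Get(ctx, client.ObjectKeyFromObject(specObj), clusterObj)
	switch {
	case err == nil:
		// Object is visible in the cache; update it if the spec has
		// drifted (diff check elided in this sketch).
		return c.Update(ctx, specObj)
	case k8sErrors.IsNotFound(err):
		if createErr := c.Create(ctx, specObj); k8sErrors.IsAlreadyExists(createErr) {
			// Object exists on the cluster but is invisible to the cache:
			// optimistically update it so the watch label gets added and
			// subsequent Get() calls succeed.
			return c.Update(ctx, specObj)
		} else {
			return createErr
		}
	default:
		return err
	}
}
```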
Adapt the metadata and storage cleanup tasks to use the new sync flow Signed-off-by: Angel Misevski <amisevsk@redhat.com>
/test v8-devworkspace-operator-e2e, v8-che-happy-path
Great job!
Regarding the changes -- nothing to comment, apart from the fact that I don't like ("don't like" should show that it's a personal opinion that can be ignored) that the sync func returns errors when an object is updated successfully.
Going to test now.
```diff
@@ -26,6 +26,14 @@ const (
 	// DevWorkspaceNameLabel is the label key to store workspace name
 	DevWorkspaceNameLabel = "controller.devfile.io/devworkspace_name"

+	// DevWorkspaceWatchConfigMapLabel marks a configmap so that it is watched by the controller.
+	// This label is required on all configmaps that should be seen by the controller.
+	DevWorkspaceWatchConfigMapLabel = "controller.devfile.io/watch-configmap"
```
I wonder if we need a dedicated label for each object type?
WDYM? For most objects, the `devworkspace_id` label is sufficient, but I had to add separate ones for configmaps/secrets since those do not, by default, have a single label they always carry. We might run into a similar issue for Deployments if we continue supporting async storage, but we could fudge that by using `devworkspace_id: all` or something.

We could use one label for both secrets and configmaps, but then we run into the issue of how to name it -- `watch-resource` may be unclear as it only applies to configmaps/secrets, and we use different labels for other objects.
I think you got my question )

> We could use one label for both secrets and configmaps, but then we run into the issue of how to name it -- watch-resource may be unclear as it only applies to configmaps/secrets, and we use different labels for other objects.

That's exactly what I have in mind, including the concern ) So then I think it makes sense to leave it as is.
One tiny +1 for using a common watch label -- it would allow us to avoid having different articles/sections in the docs, like here https://docs.google.com/document/d/1IR78XlxO37VTWXOu-uE-2nKC93D1GNhZomGoRaN518o/edit?usp=sharing
So we'd just provide a single patch command instead of two.
The concern:

> We could use one label for both secrets and configmaps, but then we run into the issue of how to name it -- watch-resource may be unclear as it only applies to configmaps/secrets, and we use different labels for other objects.

may be addressed by the following explanation: the DevWorkspace operator watches objects owned by a DevWorkspace CR (ones which are additionally labeled with the workspace id) or standalone objects labeled with `watch: true`.
I assume that there always is at least some kind of label on the secret/configmap that DWO handles, even if such labels differ depending on the purpose.
I think (I have not tried this out) it should be possible to write an "OR" label selector -- if not with the existing code then by implementing a custom `labels.Selector`.
I personally think requiring 2 labels on a single object for a single purpose is a little bit weird from the UX perspective.
an "or" selector is apparently impossible, so please ignore me :)
> I personally think requiring 2 labels on a single object for a single purpose is a little bit weird from the UX perspective.

The reality is that there are multiple labels that can get applied to configmaps or secrets, and they each serve a different purpose:

1. `controller.devfile.io/watch-[secret|configmap]`: mark this secret/configmap as "of interest" to the controller; necessary due to the caching change
2. `controller.devfile.io/mount-to-devworkspace`: mount this resource to the workspace; used by external tools/users to share info across multiple workspaces
3. `controller.devfile.io/git-credential`: mark a secret as holding git credentials, which is handled differently from the above
4. `controller.devfile.io/devworkspace_id`: associate this resource with the workspace with the specified workspace ID

Cases 2, 3, and 4 can exist independently of each other, e.g. a user-defined mounted configmap won't have the `devworkspace_id` label, and the metadata configmap we provision for workspaces won't have the `mount-to-devworkspace` label. As a result, there's no single label selector we can use here, so we have to add the `watch` label to cover all use cases. Moreover, there will be cases when secrets/configmaps on the cluster that we're interested in have only the `controller.devfile.io/watch-[secret|configmap]` label and no others.
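For illustration, a sketch in Go of a user-provided configmap covering cases 1 and 2 above: it carries the watch label so the restricted cache can see it, plus the mount label so it gets mounted into workspaces. The name, namespace, and data are hypothetical.

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// exampleAutomountConfigMap builds a configmap that the controller should
// both see (watch-configmap) and auto-mount (mount-to-devworkspace).
func exampleAutomountConfigMap() *corev1.ConfigMap {
	return &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "my-settings",    // hypothetical
			Namespace: "user-namespace", // hypothetical
			Labels: map[string]string{
				// Required so the controller's restricted cache sees this object.
				"controller.devfile.io/watch-configmap": "true",
				// Requests that the controller mount this configmap into workspaces.
				"controller.devfile.io/mount-to-devworkspace": "true",
			},
		},
		Data: map[string]string{"settings.xml": "<settings/>"},
	}
}
```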
> The concern [...] may be addressed by the following explanation:
> DevWorkspace operator watches objects owned by DevWorkspace CR (ones which are additionally labeled with workspace id) or standalone objects labeled with watch: true

Potentially, but it's still somewhat unclear: we watch PVCs without the label applied, and a secret/configmap becomes invisible if the watch label is removed, even if it has the workspace ID label. I'm open to using one label for both, but I'm not sure it's a huge gain in documentation burden.
I haven't tested through a proper load-testing flow, but with usual testing it works fine.
Please, after it's merged, open a PR for the Che operator to update the DWO Go dependency, or create an issue for them.
Is it possible to use a non-caching client (like here) instead of adding labels just to manage caches?
It would be possible, but it goes against the intention of the controller design. With caching, we can avoid making API calls out to the cluster most of the time (only really hitting the API server on create/update calls), so I'd be concerned about performance/behavior if we went that route.

As a rough example, it takes around 10-15 reconciles to start a workspace, and all of these reconciles tend to happen within the first couple of seconds. If during that process we have to list/get secrets/configmaps multiple times (in the case of mounting git credentials, we might create two secrets and two configmaps, and read secrets + configmaps 2-3 times per reconcile), we're looking at potentially 50+ API requests per started workspace that are not present with caching. We do use a non-caching client in a few places, but those are reserved for startup or one-time tasks. In load testing, I've seen 200+ reconciles per second in the controller, which could result in thousands of get/list calls per second.

The other concern I'd have is throughput, as making actual calls to the API will necessarily be slower than reading from the cache. This could directly impact startup time when under load, which we'd prefer to avoid.
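A minimal sketch of the distinction, assuming controller-runtime's manager API; `readSecret` is a hypothetical helper, not DWO code:

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// readSecret contrasts the two clients: the manager's default client is
// backed by the informer cache, so repeated Gets during the ~10-15 reconciles
// of a workspace start are nearly free, while a client built with client.New
// issues a live API request on every call.
func readSecret(ctx context.Context, mgr ctrl.Manager, key client.ObjectKey) error {
	secret := &corev1.Secret{}

	// Cache-backed read: served from the informer store, no API round trip.
	if err := mgr.GetClient().Get(ctx, key, secret); err != nil {
		return err
	}

	// Non-caching read: hits the API server directly. Reserved for startup
	// or one-time tasks; at 200+ reconciles per second this would translate
	// to thousands of API requests per second.
	direct, err := client.New(mgr.GetConfig(), client.Options{Scheme: mgr.GetScheme()})
	if err != nil {
		return err
	}
	return direct.Get(ctx, key, secret)
}
```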
Update sync methods in devworkspaceRouting controller to use updated sync package where appropriate. Note deleting out-of-date network objects must still be done in the controller, as it's not possible to iterate through a generic list of client.Objects. Signed-off-by: Angel Misevski <amisevsk@redhat.com>
On Kubernetes, we can't restrict the cache for Routes since they are not part of the included scheme. As a result, we have to work around this by adding Routes to the cache only on OpenShift. Signed-off-by: Angel Misevski <amisevsk@redhat.com>
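A minimal sketch of that workaround; `buildSelectors` and the `isOpenShift` flag are hypothetical stand-ins for the operator's actual platform detection and wiring:

```go
package main

import (
	routev1 "github.com/openshift/api/route/v1"
	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/apimachinery/pkg/labels"
	"sigs.k8s.io/controller-runtime/pkg/cache"
)

// buildSelectors restricts the cache per type. Routes are not part of the
// scheme on vanilla Kubernetes, so their selector is registered only on
// OpenShift.
func buildSelectors(isOpenShift bool) (cache.SelectorsByObject, error) {
	// Match any object that carries the devworkspace_id label at all
	// (an "exists" requirement, regardless of value).
	idSelector, err := labels.Parse("controller.devfile.io/devworkspace_id")
	if err != nil {
		return nil, err
	}
	if isOpenShift {
		return cache.SelectorsByObject{
			&appsv1.Deployment{}: {Label: idSelector},
			&routev1.Route{}:     {Label: idSelector},
		}, nil
	}
	return cache.SelectorsByObject{
		&appsv1.Deployment{}: {Label: idSelector},
	}, nil
}
```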
Pass around the full clusterAPI struct to methods in automount package, to allow for shared Context/Logging. Signed-off-by: Angel Misevski <amisevsk@redhat.com>
For most objects, we can client.Update() using the spec object without issue. However, for Services, updates are rejected if they try to unset spec.ClusterIP. This means we need to copy the ClusterIP from the cluster service before updating. This commit adds an extensible mechanism for specifying type-specific update functions that are called whenever we attempt to update a cluster object. Signed-off-by: Angel Misevski <amisevsk@redhat.com>
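A minimal sketch of such a mechanism; the `updateFunc` type and `updateFuncs` map are hypothetical names, not necessarily what the sync package exports:

```go
package sync

import (
	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// updateFunc adjusts the desired (spec) object using the version currently
// on the cluster, just before an Update call is issued.
type updateFunc func(spec, cluster client.Object)

// updateFuncs maps a kind to its pre-update hook.
var updateFuncs = map[string]updateFunc{
	"Service": func(spec, cluster client.Object) {
		specSvc, okSpec := spec.(*corev1.Service)
		clusterSvc, okCluster := cluster.(*corev1.Service)
		if !okSpec || !okCluster {
			return
		}
		// The API server rejects Service updates that unset spec.clusterIP,
		// so carry the cluster-assigned value over to the spec object.
		specSvc.Spec.ClusterIP = clusterSvc.Spec.ClusterIP
	},
}
```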
Use diffOpts when printing spec vs cluster object diffs when updates are required. Signed-off-by: Angel Misevski <amisevsk@redhat.com>
/test v8-devworkspace-operator-e2e, v8-che-happy-path
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: amisevsk, JPinkney, sleshchenko

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.
/retest
@amisevsk: The following test failed, say `/retest` to rerun it:

Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
v0.10 was branched before this PR was merged; it'll be included in v0.11
What does this PR do?

Limits the internal controller cache for objects managed by the DevWorkspace Operator (using the cache-selector functionality introduced in controller-runtime v0.9.0; see doc). Since we can specify just one selector for limiting the cache, we use

- the `devworkspace_id` label for most objects (deployments, services, etc.)
- new labels `controller.devfile.io/watch-configmap` and `controller.devfile.io/watch-secret` that must be applied to secrets/configmaps we use

The downside of doing this is that any objects that do not match the selector cannot be read in the controller. This has two impacts on our design:

- a `Get()` on an unlabeled object returns an IsNotFound error even when the object exists on the cluster
- getting an `AlreadyExists` error when trying to create such an object requires us to try and update it (thereby adding the required label)

To address the second point above, rather than rewrite the reconcile logic for each place it's used, I consolidated all syncing (spec vs cluster) into one package and reworked everywhere else to use it.
The main benefit of this change is drastically reduced memory usage on large clusters. For a cluster with 1000 stopped devworkspaces, ~1000 deployments/services/routes, ~5000 configmaps, ~24000 secrets, and ~26000 rolebindings, we see

- ~1750Mi memory usage before this change
- ~90Mi memory usage after this change

This represents an approximate memory use reduction of 18-19x. In the specific case of the cluster I tested against, it appears (unsurprisingly) that secrets are the main culprit: testing a variant image that only restricts the cache for secrets reduced memory usage to ~350Mi.

I'm opting to restrict the cache for all objects, as otherwise the memory usage of DWO depends on whatever objects exist on the cluster. With the cache restriction, memory use should be governed mainly by how many DevWorkspaces exist on the cluster.
Additional info

Graph of memory usage for DWO while testing various cases (each image is restarted 5 times, as memory usage spikes on startup):

Note the numbers here don't match the ones listed above exactly, as these are internal metrics while the numbers above use pod metrics from the cluster. Among the cases being tested is restricting the cache to objects with the `devworkspace_id` label (the easiest case).

Diagram for the new sync object flow:
What issues does this PR fix or reference?

Is it tested? How?

Testing might be tricky: I've tried testing the above locally and haven't seen any issues.

PR Checklist

- E2E tests pass (use `/test v8-devworkspace-operator-e2e, v8-che-happy-path` to trigger)
  - `v8-devworkspace-operator-e2e`: DevWorkspace e2e test
  - `v8-che-happy-path`: Happy path test for verifying integration with Che