runtime: Store resources in an Arc
#786
Conversation
Codecov Report
```diff
@@            Coverage Diff             @@
##           master     #786      +/-   ##
==========================================
+ Coverage   71.99%   72.01%   +0.02%
==========================================
  Files          55       55
  Lines        3724     3727       +3
==========================================
+ Hits         2681     2684       +3
  Misses       1043     1043
```
Continue to review full report at Codecov.
This change makes sense to me, though I haven't reviewed the details yet.
Motivation

The read-side of `kube_runtime::reflector` is allocation-heavy: each read incurs a clone of the entire resource data structure. This can incur substantial unnecessary load in read-heavy use cases, especially for larger resources. Furthermore, with f66dd90, these allocations are frequently performed while a read-lock is held.

Solution

This change modifies `kube_runtime::reflector` to store resources in an atomic, reference-counted cell (`std::sync::Arc`) so that clones do not incur allocation. The goal of this change is to amortize costs so that these allocations are incurred only once--at update-time, before the write-lock is acquired--making reads cheaper and generally reducing lock contention.

This changes the public APIs of `kube_runtime`'s `controller`, `reflector`, and `finalizer` modules.

Signed-off-by: Oliver Gould <ver@buoyant.io>
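For context, a minimal sketch of the cost difference this addresses; the `Pod`-like type below is purely illustrative, not a kube-rs type:

```rust
use std::sync::Arc;

// Hypothetical resource type standing in for a Kubernetes object;
// the field names here are illustrative, not from kube-rs.
#[derive(Clone)]
struct Pod {
    name: String,
    labels: Vec<(String, String)>,
}

fn main() {
    let pod = Arc::new(Pod {
        name: "example".to_string(),
        labels: vec![("app".to_string(), "demo".to_string())],
    });

    // Cloning the Arc only bumps a reference count: no new String or Vec
    // allocations, no matter how large the resource is.
    let cheap = Arc::clone(&pod);

    // Cloning the inner value deep-copies every heap allocation; this is
    // roughly what every read paid before this change.
    let expensive: Pod = (*pod).clone();

    assert_eq!(cheap.name, expensive.name);
}
```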
I don't feel passionately one way or the other about this change, but it seemed like a worthwhile proposal. The underlying assumption behind this change is that reads will be more common than updates. This should mostly eliminate allocations while the lock is held. The notable exception is growing the hashmap, which may occur during individual resource additions (though not on restart, since the hashmap gets pre-allocated in that case).
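To illustrate the locking claim, a simplified model of such a store (not the actual `kube_runtime` implementation; the key and lock types here are assumptions):

```rust
use std::collections::HashMap;
use std::sync::{Arc, RwLock};

// A simplified model of a reflector store; the real kube_runtime store
// differs in its key type and locking details.
struct Store<K> {
    inner: RwLock<HashMap<String, Arc<K>>>,
}

impl<K> Store<K> {
    // Read path: the only work under the read lock is a reference-count
    // bump, so readers no longer allocate while holding the lock.
    fn get(&self, key: &str) -> Option<Arc<K>> {
        self.inner.read().unwrap().get(key).map(Arc::clone)
    }

    // Write path: the Arc is allocated *before* the write lock is taken,
    // keeping the critical section short. Growing the HashMap can still
    // allocate under the lock on insert -- the exception noted above.
    fn apply(&self, key: String, obj: K) {
        let obj = Arc::new(obj);
        self.inner.write().unwrap().insert(key, obj);
    }
}
```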
@teozkr @clux I'm curious if you're open to this change; I'm thinking about replacing some of our manual indexing with a reflector. I think the downside is that the type signatures are a bit gnarlier, but this should greatly reduce copying/allocation when reading from a reflector. This also has the side-effect of using the type system to more clearly mark these values as shared/read-only. If mutable access is actually required, the inner resource can still be cloned out of the `Arc`.
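On that mutable-access escape hatch, a small sketch of the two standard options `std::sync::Arc` offers; the `Resource` type is hypothetical:

```rust
use std::sync::Arc;

// Hypothetical resource type; any Clone type works the same way.
#[derive(Clone)]
struct Resource {
    replicas: i32,
}

fn main() {
    let shared = Arc::new(Resource { replicas: 1 });

    // Option 1: clone the inner value out of the Arc. This is the one
    // explicit deep copy, paid only when mutation is actually needed.
    let mut owned: Resource = (*shared).clone();
    owned.replicas = 3;

    // Option 2: Arc::make_mut clones only if other references exist
    // (copy-on-write), then hands back a mutable reference.
    let mut cow = Arc::clone(&shared);
    Arc::make_mut(&mut cow).replicas = 5;

    assert_eq!(shared.replicas, 1); // the shared original is untouched
    assert_eq!(cow.replicas, 5);
}
```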
I think this makes sense. The current model is overly allocation-heavy, and this is a pretty simple change to communicate to controller users. Minor request left for a note on it, but this looks clean to me. Very unintrusive on the reflector side.
```diff
 /// Controller triggers this whenever our main object or our children changed
-async fn reconcile(generator: ConfigMapGenerator, ctx: Context<Data>) -> Result<ReconcilerAction, Error> {
+async fn reconcile(
+    generator: Arc<ConfigMapGenerator>,
+    ctx: Context<Data>,
+) -> Result<ReconcilerAction, Error> {
     log::info!("working hard");
```
This change is the biggest breaking change here imo, and it will need a comment in the changelog to highlight it upon release
Actually, don't worry about this; I will add something like the following to the CHANGELOG post-merge:
Notices
To reduce the amount of cloning done inside the runtime by the reflectors and controllers (see #786), the following change is needed to the signature of your reconcile functions:
```diff
-async fn reconcile(myobj: MyK, ctx: Context<Data>) -> Result<ReconcilerAction>
+async fn reconcile(myobj: Arc<MyK>, ctx: Context<Data>) -> Result<ReconcilerAction>
```
The signatures are indeed a bit gnarlier, but it doesn't really leak much to the user side - apart from in the reconcile fn 👍 Thanks for doing this. We haven't really optimized much in this area of things, so this is welcome.
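For migrating users, a sketch of what a reconciler looks like after this change, assuming the `kube_runtime` controller API at the time of this PR; `MyK`, `Data`, and `Error` are placeholder types:

```rust
use std::sync::Arc;
use std::time::Duration;
use kube_runtime::controller::{Context, ReconcilerAction};

struct MyK;   // hypothetical custom resource
struct Data;  // hypothetical controller state
struct Error; // hypothetical error type

// After this change the reconciler receives its object behind an Arc;
// reads go through Deref, so most function bodies compile unchanged.
async fn reconcile(myobj: Arc<MyK>, _ctx: Context<Data>) -> Result<ReconcilerAction, Error> {
    let _obj: &MyK = &myobj; // Deref coercion gives a plain reference where one is needed
    Ok(ReconcilerAction {
        requeue_after: Some(Duration::from_secs(300)),
    })
}
```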
Builds on kube-rs#786, which did the minimum viable thing by moving the clone into the finalizer module. Signed-off-by: Teo Klestrup Röijezon <teo@nullable.se>