Conversation
@@ -113,6 +126,7 @@ func Fix(in []schema.Point, from, to, interval uint32) []schema.Point {
			o -= 1
		}
	}
	pointSlicePool.Put(in[:0])
This seems backwards to me. I'd have thought that the caller would be responsible for returning `in` to the pool, since this function has no way of knowing whether the caller intends to continue using it. Similarly, it seems like the caller should provide `out` so that it can be explicitly responsible for both getting it from the pool and cleaning it up afterwards.

Finally, is there a reason that we don't put the retry and allocation logic into `pointSlicePool`?
> I'd have thought that the caller would be responsible for returning `in` to the pool.
> Similarly, it seems like the caller should provide `out` so that it can be explicitly responsible for both getting it from the pool and cleaning it up afterwards.

We can do that. I definitely think the pool interaction for `in` and `out` should be consistent (either both handled by the caller, or both within `Fix`). This would make `Fix` a more pure utility function, which is good, but the downside is that pulling the `neededCap := int((last-first)/interval + 1)` calculation out of `Fix` is a bit awkward.

> Finally, is there a reason that we don't put the retry and allocation logic into `pointSlicePool`?

That makes sense. Sidenote: I've been thinking of having separate size classes; that sort of thing would fit naturally if we make `pointSlicePool` a more full-featured object rather than merely a `sync.Pool` instance.
yeah, that does make moving `out` outside a little tough
(pprof `in_use space` profiles, before and after:)
Profiling shows a 66% reduction in `api.Fix`, but at the expense of `itersToPoints` allocating more. It's not trivial to compare, since the total heap is also different, but based on the other consumers (e.g. gocql, cache `AddRange`) now being a larger % of memory allocation, it confirms we now allocate less overall. BTW, `itersToPoints` allocates here:
It's probably due to it getting a slice from the pool that is now smaller than before (because in other places we store smaller slices in the pool?), so this can be addressed with size classes, size hints, or something like that.
force-pushed from aea25d3 to ead0b5b
LGTM. In the future we will need to revisit the other issues referenced in this PR.
force-pushed from ead0b5b to 8fde13c
The main dangerous thing about this PR: when we release a slice to the pool, we have to be sure there are no references to it anywhere else (e.g. in the cache).
before https://snapshot.raintank.io/dashboard/snapshot/ZGojZ4ApEM77TWkM87LLbb7TcWcKNxsx?orgId=2
after https://snapshot.raintank.io/dashboard/snapshot/WwuhppsQsTwt3JnxmExRFLh52v1I0j85?orgId=2
note significant difference in:
note the minor (insignificant?) RAM usage at the end of the test (but it started with more?)
test run is basically