Make Counter and MinMaxSumCount aggregators thread safe #439
Changes from 3 commits
@@ -13,6 +13,7 @@
 # limitations under the License.

 import abc
+import threading
 from collections import namedtuple

@@ -47,13 +48,16 @@ def __init__(self):
         super().__init__()
         self.current = 0
         self.checkpoint = 0
+        self._lock = threading.Lock()

     def update(self, value):
-        self.current += value
+        with self._lock:
+            self.current += value

     def take_checkpoint(self):
-        self.checkpoint = self.current
-        self.current = 0
+        with self._lock:
+            self.checkpoint = self.current
+            self.current = 0

     def merge(self, other):
         self.checkpoint += other.checkpoint
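A note on why the lock matters here: current += value is a read-modify-write, so two unsynchronized threads can read the same old value and one increment is lost; the GIL serializes bytecodes, not compound operations. The following standalone sketch is illustration only (UnsafeCounter and SafeCounter are made-up stand-ins, not code from this PR):

import threading


class UnsafeCounter:
    def __init__(self):
        self.current = 0

    def update(self, value):
        self.current += value  # read-modify-write, not atomic


class SafeCounter:
    def __init__(self):
        self.current = 0
        self._lock = threading.Lock()

    def update(self, value):
        with self._lock:  # same pattern as this PR
            self.current += value


def hammer(counter, updates=100_000, workers=4):
    def work():
        for _ in range(updates):
            counter.update(1)

    threads = [threading.Thread(target=work) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter.current


print(hammer(UnsafeCounter()))  # may print less than 400000 (lost updates)
print(hammer(SafeCounter()))    # always 400000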
@@ -63,46 +67,45 @@ class MinMaxSumCountAggregator(Aggregator):
     """Agregator for Measure metrics that keeps min, max, sum and count."""

     _TYPE = namedtuple("minmaxsumcount", "min max sum count")
+    _EMPTY = _TYPE(None, None, None, 0)
Review thread on the new _EMPTY value:

Reviewer: Shouldn't sum be 0?

Author: Good question, I don't have a strong feeling about that; if you have, please let me know and I'll change it.

Reviewer: I think it should be, because of https://en.wikipedia.org/wiki/Identity_element. Also not a strong opinion.

Follow-up comment: Given that, …

Reviewer: Either way's fine with me, but I'd expect sum and count to either both be None or both be 0. But this is getting dangerously close to philosophy. LGTM as is.
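For what the identity-element argument amounts to in code (illustration only, not part of the diff): with an empty sum of 0, merging needs no special case, while an empty sum of None forces the kind of guard the _sum helper below carries.

# 0 is the identity element of addition, so an empty sum of 0 merges directly:
assert 0 + 5 == 5

# An empty sum of None needs a guard, as in the _sum helper being removed below:
def merge_sum(val1, val2):
    if val1 is None and val2 is None:
        return None
    return (val1 or 0) + (val2 or 0)

assert merge_sum(None, 5) == 5
assert merge_sum(None, None) is None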
-    @classmethod
-    def _min(cls, val1, val2):
-        if val1 is None and val2 is None:
-            return None
-        return min(val1 or val2, val2 or val1)
-
-    @classmethod
-    def _max(cls, val1, val2):
-        if val1 is None and val2 is None:
-            return None
-        return max(val1 or val2, val2 or val1)
-
     @classmethod
-    def _sum(cls, val1, val2):
-        if val1 is None and val2 is None:
-            return None
-        return (val1 or 0) + (val2 or 0)
+    def _merge_checkpoint(cls, val1, val2):
+        if val1 is cls._EMPTY:
+            return val2
+        if val2 is cls._EMPTY:
+            return val1
+        return cls._TYPE(
+            min(val1.min, val2.min),
+            max(val1.max, val2.max),
+            val1.sum + val2.sum,
+            val1.count + val2.count,
+        )
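The _EMPTY sentinel lets _merge_checkpoint treat an untouched checkpoint as an identity value and drop the per-field None handling. A rough standalone equivalent of that behavior (plain namedtuple, not the SDK class; assumes the same semantics as the diff above):

from collections import namedtuple

MMSC = namedtuple("minmaxsumcount", "min max sum count")
EMPTY = MMSC(None, None, None, 0)


def merge_checkpoint(val1, val2):
    # An empty side acts as the identity: return the other side unchanged.
    if val1 is EMPTY:
        return val2
    if val2 is EMPTY:
        return val1
    return MMSC(
        min(val1.min, val2.min),
        max(val1.max, val2.max),
        val1.sum + val2.sum,
        val1.count + val2.count,
    )


assert merge_checkpoint(EMPTY, MMSC(1, 9, 15, 3)) == MMSC(1, 9, 15, 3)
assert merge_checkpoint(MMSC(1, 9, 15, 3), MMSC(0, 4, 6, 2)) == MMSC(0, 9, 21, 5)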
     def __init__(self):
         super().__init__()
-        self.current = self._TYPE(None, None, None, 0)
-        self.checkpoint = self._TYPE(None, None, None, 0)
+        self.current = self._EMPTY
+        self.checkpoint = self._EMPTY
+        self._lock = threading.Lock()

     def update(self, value):
-        self.current = self._TYPE(
-            self._min(self.current.min, value),
-            self._max(self.current.max, value),
-            self._sum(self.current.sum, value),
-            self.current.count + 1,
-        )
+        with self._lock:
+            if self.current is self._EMPTY:
+                self.current = self._TYPE(value, value, value, 1)
+            else:
+                self.current = self._TYPE(
+                    min(self.current.min, value),
+                    max(self.current.max, value),
+                    self.current.sum + value,
+                    self.current.count + 1,
+                )
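In the new update, the first recorded value seeds all four fields at once, and later values are folded in; the whole replacement of self.current happens under the lock because swapping the tuple is itself a read-modify-write. A small worked trace of the folding logic (standalone illustration, same made-up names as the sketch above):

from collections import namedtuple

MMSC = namedtuple("minmaxsumcount", "min max sum count")
EMPTY = MMSC(None, None, None, 0)


def fold(current, value):
    # First value initializes every field; later values are aggregated in.
    if current is EMPTY:
        return MMSC(value, value, value, 1)
    return MMSC(
        min(current.min, value),
        max(current.max, value),
        current.sum + value,
        current.count + 1,
    )


state = EMPTY
for value in (3, -1, 7):
    state = fold(state, value)
assert state == MMSC(min=-1, max=7, sum=9, count=3)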
     def take_checkpoint(self):
-        self.checkpoint = self.current
-        self.current = self._TYPE(None, None, None, 0)
+        with self._lock:
+            self.checkpoint = self.current
+            self.current = self._EMPTY

     def merge(self, other):
-        self.checkpoint = self._TYPE(
-            self._min(self.checkpoint.min, other.checkpoint.min),
-            self._max(self.checkpoint.max, other.checkpoint.max),
-            self._sum(self.checkpoint.sum, other.checkpoint.sum),
-            self.checkpoint.count + other.checkpoint.count,
+        self.checkpoint = self._merge_checkpoint(
+            self.checkpoint, other.checkpoint
         )

Review thread on merge():

Reviewer: I think you still need to lock here: […] This is based on: […]

Author: Please see #439 (comment).
A second review thread on the same merge() change:

Reviewer: Don't you need to lock here too?

Author: According to my understanding, merge() and take_checkpoint() will never be called concurrently. Those methods are invoked in Batcher::process(), which is invoked from Meter::collect(), which is never concurrent. @lzchen could you please check this reasoning? (See opentelemetry-python/opentelemetry-sdk/src/opentelemetry/sdk/metrics/export/batcher.py, lines 87 to 103 in ed25287.)

Reviewer: Even still, I think it's better not to rely on the behavior of the caller here. Even better: if they're never called concurrently, there will never be lock contention to slow this down.

Author: I agree. I added locks to merge too, but I didn't add tests for that. We can revisit this later on when considering the use of atomics libraries here.
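Since the later commits add a lock around merge (per the last comment) but that code is not shown in this 3-commit view, here is a minimal hypothetical sketch of the shape such a change could take, using a stand-in class rather than the SDK aggregator:

import threading


class LockedMergeSketch:
    """Stand-in aggregator showing merge() guarded by the same lock as update()."""

    def __init__(self):
        self.current = 0
        self.checkpoint = 0
        self._lock = threading.Lock()

    def update(self, value):
        with self._lock:
            self.current += value

    def take_checkpoint(self):
        with self._lock:
            self.checkpoint = self.current
            self.current = 0

    def merge(self, other):
        # Taking the lock here as well means correctness does not rely on the
        # Batcher/Meter caller never running merge() concurrently with
        # take_checkpoint() on the same aggregator.
        with self._lock:
            self.checkpoint += other.checkpoint

Only the receiving aggregator's lock is taken here, so reading other.checkpoint is still assumed to happen after the other side has been checkpointed; a fuller treatment might guard that side too.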