First cut at single writer principle. #1574
Conversation
specification/metrics/datamodel.md
Outdated
- `Resource` is expected to uniquely identify the source "service" or
  "application".
- `Metric` name and `Attribute`s are expected to uniquely identify a
  timeseries generated by the resource.
Copying my comment from the doc: I believe the InstrumentationLibrary name/version should also be part of single writer requirement. For example, you have two instrumented http libraries in the same process (resource), both producing http.client.duration metric.
I actually almost added that, but I'm not 100% convinced.
TL;DR: Do we expect backends to append Resource labels + InstrumentationLibrary-as-labels to gain unique timeseries? For resource, almost certainly (and in Google Cloud we directly map this concept). It's less certain to me that's true on Instrumentation library.
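To make the debate concrete, here is a minimal sketch of the "InstrumentationLibrary-as-labels" idea: fold the library's name/version into the attribute set so two libraries emitting the same metric name stay distinct. The label keys and function name are illustrative assumptions, not something the spec mandates.

```python
# Hypothetical sketch: fold the instrumentation library (scope) identity into
# the attribute set. The "otel.scope.*" label keys are an assumption here,
# not mandated by the data model under discussion.
def with_scope_labels(attributes: dict, scope_name: str, scope_version: str) -> dict:
    labeled = dict(attributes)
    labeled["otel.scope.name"] = scope_name
    labeled["otel.scope.version"] = scope_version
    return labeled

# Two instrumented http libraries in one process emitting the same metric name:
a = with_scope_labels({"http.method": "GET"}, "http-lib-a", "1.0")
b = with_scope_labels({"http.method": "GET"}, "http-lib-b", "2.1")
assert a != b  # distinct timeseries once scope is part of the label set
```

Whether backends actually append these labels (as they commonly do for `Resource`) is exactly the open question in this thread.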
@jmacd any thought on this?
Based on SiG discussions, this needs some work defining "Metric Stream" and identity for consistent vocabulary throughout the specification.
specification/metrics/datamodel.md
Outdated
originating source of truth. In practical terms, this implies the following:

- All metric data points produced by OTel SDKs must be globally unique.
- `Resource` is expected to uniquely identify the source "service" or
I'd give a link to the service attributes here and avoid mentioning "application" in quotes. There is no semantic convention family for applications.
Right, and service isn't broad enough to denote what we mean here IMO. I'll just use service with a link as suggested, as the Resource debate belongs in a broader forum.
specification/metrics/datamodel.md
Outdated
In real systems, particularly in misbehaving systems, there is in fact the
possibility of multiple writers sending data for the same metric and attribute
set. This requirement states that receivers SHOULD presume a single writer was
This case is duplication and the elimination is deduplication, right? If I'm not wrong, would it make sense to stick to this terminology because it's already commonly known/used.
Yep, added both here and @jmacd can comment on his original wording.
specification/metrics/datamodel.md
Outdated
When more than one process writes the same timeseries, OTLP data points may
appear to overlap. This condition typically results from misconfiguration, but
can also result from running identical processes (indicative of operating system
or SDK bugs). When there are overlapping points, receivers SHOULD eliminate
How does this work for Prometheus? Not deduping would be fine (and might be used for redundancy) because everything is cumulative/absolute. Do we care about this behavior for all or is it only for deltas?
I'll let @jmacd comment here. From what I understand this can cause issues in prometheus, but I wasn't able to get any specifics. I'll keep doing some investigation for a better answer here.
- The originating `Resource`
- The metric stream's `name`.
- The attached `Attribute`s
- The metric stream's point kind.
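The four identity components listed above can be sketched as a single hashable key. This is a toy illustration of the identity rule, not code from any OTel SDK; the function and field names are hypothetical.

```python
# Hypothetical sketch: build an order-independent, hashable identity key for a
# metric stream from the four components named in the spec text above.
def stream_identity(resource: dict, name: str, attributes: dict, point_kind: str) -> tuple:
    return (
        tuple(sorted(resource.items())),    # originating Resource
        name,                               # metric stream name
        tuple(sorted(attributes.items())),  # attached Attributes
        point_kind,                         # point kind (Sum, Gauge, Histogram, ...)
    )

a = stream_identity({"service.name": "checkout"}, "http.client.duration",
                    {"http.method": "GET"}, "Histogram")
b = stream_identity({"service.name": "checkout"}, "http.client.duration",
                    {"http.method": "GET"}, "Sum")
# Same resource/name/attributes but a different point kind yields a distinct key,
# which is exactly the contested part of this thread.
assert a != b
```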
Is the point kind really intended to be part of the identity? Is it possible to have Timeseries with the exact same set of the above 3 elements and only differing in the point kind? Do any/most metric backends that exist today support this notion?
We discussed this heavily in the SiG. TL;DR: it's likely an error scenario/misbehaving system that would have the same timeseries with different point types. For the purposes of OTel + Aggregation we do NOT unify them and leave that option to backends to determine the right thing to do.
This isn't about preventing backends from supporting it, but allowing them to do so and not forcing OTel to specify some kind of behavior that works with backends which do not.
> This isn't about preventing backends from supporting it, but allowing them to do so and not forcing OTel to specify some kind of behavior that works with backends which do not.
On the other hand, if we explicitly declare that this is a valid situation, OpenTelemetry-compliant metric sources may start emitting it, causing problems for backends that don't expect to see such data. I think this is the case when we need to have a stance, and that stance has a consequence for backends. We either explicitly prohibit Timeseries with identical (resource, name, attributes) and different data points, or we explicitly allow it, which means backends have to support it, otherwise they can't accept OpenTelemetry-compliant data.
I can see how allowing this can be valuable (e.g. I may export a gauge and a histogram for the same instrument with exact same name). My concern here is that if we allow this we may suddenly put vendors in a situation where they can't really accept this data.
Just to be clear: I don't know if this is really a problem for backends. If you know that it is commonly accepted in the industry to emit such timeseries, then I don't see a problem.
One thing I'm not sure I made clear (though it was crystal clear in the discussion):
- The same 3 elements but differing point kind coming out of an SDK is considered an "error state" that should be handled by an SDK.
- The treatment here (and discussion) should be about what happens if the collector (or downstream processor) receives two streams that only differ by point type. The collector treats them as separate streams.
- Backends have the option to issue an error or try to handle the scenario. For the situation where the differences are floats vs. ints, we think it's common to do conversion (and may plan to define our own "weak conformance" conversion for the collector). We think the backend has the best chance of issuing a good error message notifying the user they have two independent metric sources reported as if they are the same with different metric types.
TL;DR: This is an error state; we just describe here how the "model" treats that error scenario so progress can be made and the best component has a chance to give the best error message.
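A minimal sketch of the float-vs-int case mentioned above: if two otherwise-identical streams differ only in numeric point type, a processor could widen integer points to doubles ("weak conformance"); any other kind mismatch remains an error for the backend to report. The kind names and function are illustrative assumptions, not taken from the OTLP protobuf.

```python
# Hypothetical "weak conformance" reconciliation of point kinds. Kind names
# ("IntSum"/"DoubleSum") are illustrative, not OTLP identifiers.
def reconcile_point_kinds(kind_a: str, kind_b: str) -> str:
    numeric_widening = {"IntSum", "DoubleSum"}
    if kind_a == kind_b:
        return kind_a
    if {kind_a, kind_b} == numeric_widening:
        return "DoubleSum"  # widen integer points to double rather than erroring
    # Anything else (e.g. Sum vs. Histogram) stays an error state.
    raise ValueError(f"incompatible point kinds: {kind_a} vs {kind_b}")

assert reconcile_point_kinds("IntSum", "DoubleSum") == "DoubleSum"
```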
> The treatment here (and discussion) should be about what happens if the collector (or downstream processor) receives two streams that only differ by point type. The collector treats them as separate streams.
Our stance for the Collector so far has been that by default it is not in the business of interpreting data or "treating" it in any way. It only does so when explicitly configured by the user to do so, or when a particular non-OTLP format is used and a translation must happen.
Do you have a different expectation from the Collector?
> TL;DR: This is an error state, we just describe here how the "model" treats that error scenario so progress can be made and the best component has a chance to give the best error message.
Sounds good, makes sense to me. I may have missed it, but I do not see this in the text. Can we explicitly call out that this is an error state in the text?
> Our stance for the Collector so far has been that by default it is not in the business of interpreting data or "treating" it in any way. It only does so when explicitly configured by the user to do so, or when a particular non-OTLP format is used and a translation must happen. Do you have a different expectation from the Collector?
I think some of the built-in operations (label removal, aggregations, etc.) that are available in the baseline image should be clearly specified against the Data Model. Yes, I agree users configure these things, but that doesn't mean we don't clearly outline what they do and ensure their "compatibility" in shape to lead to the least-possible issues with exporters.
Specifically: I think some generally-available collector operations should be specified in a way that they can be used without causing issues across Exporters. If users provide their own processors that do crazier things outside this model, that's fine and outside any guarantees.
> Sounds good, makes sense to me. I may have missed it, but I do not see this in the text. Can we explicitly call out that this is an error state in the text?
Thought that I had, let me add that now.
I agree with everything being said here. Different point kinds should be considered errors. I agree that the collector's default behavior should be to simply pass the data through. We are trying to lay out correctness conditions for aggregating data and want definitions that help aggregators do the Right Thing when these errors arise.
It seems to me that metric name, the resource, and metric attributes are first class identifiers for metric streams. The point kind and unit are somehow in a different category: these should be treated as distinct for the purposes of pass-through data processing, but should be treated as errors for aggregation.
I'm not sure we have the proper terminology to describe properties of a metric stream that are identifying but erroneous vs identifying and compatible.
Also I think we may be distracted by trying to define what is and is not an error, where we were trying to define what is a stream and its single-writer property. We may have a set of individually valid streams with single writers, that when considered together form an erroneous condition. Example: when two streams have the same name and resource, but distinct point kind and label values. Each stream is valid, but we should never see a mixture of point kinds for a metric name, independent of how many streams it writes.
Question: if we get multiple point kinds with same metric name, how would we know which stream is erroneous? Do we need to register the point-kind + metric name prior?
@victlu It's likely impossible for OpenTelemetry itself to know which one is erroneous. A backend that supports schema'd metrics could use the currently registered metric schema to determine which one is "stale" and ingest the correct one, but the key here is that this is an "error" scenario.
The OTLP Metrics protocol is designed as a standard for transporting metric
data. To describe the intended use of this data and the associated semantic
meaning, OpenTelemetry metric data stream types will be linked into a framework
> OpenTelemetry metric data stream

This term is never defined, yet it is imbued with special meaning in this document.
It's defined on line 202, and yes this document imbues it with special meaning. We decided in the metric data model SiG that we needed a term to identify an "in-motion" stream of metric data that OpenTelemetry operates on to differentiate it from a Time Series generated from that stream.
Should I move the definition/description of metric data stream further above? Do you need more motivation for why we want a different term here?
Put simply, a lot of your concerns/comments are around generated TimeSeries from open-telemetry, whereas this document outlines what's required for a Metric Data Stream. See my follow on comments for examples where I think having this distinction helps opentelemetry AND backends.
can be converted directly into Timeseries, and share the same identity
characteristics for a Timeseries. A metric stream is identified by:

- The originating `Resource`
Why is this a requirement? Sometimes I want resource dimensions to be present in a time series, sometimes I don't, it depends on the use case and the cost of cardinality.
I think you're taking the view that a TimeSeries generated from OTLP doesn't need the resource. That's completely fine, we're documenting what a Metric Data Stream is and how it's identified. Backends can map these to timeseries however they wish (as is done today), albeit OTel should come with a recommended conversion.
What this specification means is that within OTLP you'll have a resource that's part of your identity (which is inherently true by protocol structure).
We are trying to say that points with different resources belong to different streams, even when all of the other identifying properties are identical.
Resource qualifies as identifying, whereas Unit does not qualify as such. Using the reasoning in the previous paragraph to explain this: points with different units, with all else equal, are considered an error. They are not distinct streams.
I tried to apply the same reasoning to data point kind, but ran into trouble. Two points with different kinds, but all else equal, should be considered an error, not distinct streams.
originating source of truth. In practical terms, this implies the following:

- All metric data streams produced by OTel SDKs must be uniquely produced
  and free from duplicates. All metric data streams can be uniquely
> free from duplicates
What is considered a duplicate?
This was from the @jmacd document. I think this is synonymous with "overlap", defined below. I can update but want to check with Josh to make sure I haven't missed an important point here.
I am thinking of "duplicate" points as points in a stream with identical start and end timestamps, whereas overlapping points are defined based on the temporality. Cumulative points overlap, in one sense, but two points are compatible when they share a start time. Two cumulative streams would have an overlap condition when multiple start times are mixed with overlapping end times, however. Gauge (instantaneous temporality) and Delta temporality points have different overlap rules.
Still, I think duplicate points are in a class of their own, particularly when we begin to talk about external labels as Prometheus defines them. Duplicate points may be expected, and when they are truly duplicates we typically discard all but one.
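The duplicate-vs-overlap distinction drawn above can be sketched in a few lines, assuming each point carries `start`/`end` timestamps (the dict shape and function names are hypothetical):

```python
# Toy sketch of the distinction described above, for cumulative temporality.
def is_duplicate(p1: dict, p2: dict) -> bool:
    # Duplicate: identical start AND end timestamps; typically all but one
    # copy is discarded.
    return p1["start"] == p2["start"] and p1["end"] == p2["end"]

def cumulative_overlap(p1: dict, p2: dict) -> bool:
    # Cumulative points sharing a start time are compatible; overlap arises
    # when distinct start times are mixed with intersecting time windows.
    if p1["start"] == p2["start"]:
        return False
    earlier, later = sorted((p1, p2), key=lambda p: p["start"])
    return later["start"] < earlier["end"]

assert is_duplicate({"start": 0, "end": 10}, {"start": 0, "end": 10})
assert not cumulative_overlap({"start": 0, "end": 10}, {"start": 0, "end": 20})
assert cumulative_overlap({"start": 0, "end": 10}, {"start": 5, "end": 15})
```

Gauge and Delta points would need their own overlap rules, as the comment notes.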
- Aggregations of metric streams must only be written from a single logical
  source.
  __Note: This implies aggregated metric streams must reach one destination__.
The part in bold kinda makes sense to me, but I don't see why this is a requirement of the protocol. The part about single source doesn't make sense to me.
I think protocol needs to support data in such a format that it can be repeatedly re-aggregated, if necessary.
I'm not sure how this requirement leads to data that can't be repeatedly re-aggregated. Can you provide a counter example where it's an issue? AFAIK you should be able to continually re-aggregate assuming:
- All incoming streams are uniquely identified (or could be logically identified, e.g. StatsD incoming counters)
- All reaggregations output as a new identity (e.g. different attributes or resource)
In practice that's actually a pretty easy requirement. For backends where you drop resource, you're back to aggregating what looks like the same timeseries from multiple sources, but within OTLP we can identify their separate sources.
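The two conditions above can be illustrated with a toy delta-sum re-aggregation that removes one attribute and emits the result under a new, reduced identity; all names here are hypothetical, not collector code:

```python
from collections import defaultdict

# Toy sketch: re-aggregate delta Sum points after dropping one attribute.
# Each output stream carries a NEW identity (the reduced attribute set),
# which preserves the single-writer property after re-aggregation.
def reaggregate_sums(points, drop_attr):
    out = defaultdict(int)
    for attrs, value in points:  # attrs: tuple of (key, value) pairs
        reduced = tuple((k, v) for k, v in attrs if k != drop_attr)
        out[reduced] += value
    return dict(out)

points = [
    ((("host", "a"), ("path", "/")), 3),
    ((("host", "b"), ("path", "/")), 4),
]
# Dropping "host" merges two single-writer streams into one new stream.
assert reaggregate_sums(points, "host") == {(("path", "/"),): 7}
```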
We are trying to define a "single-writer" property so that we know when it is safe to re-aggregate data. There are ways to process metrics data that remove the single-writer property (e.g., horizontal scaling) and there are ways to restore that property (e.g., having a message-queue in place) to allow re-aggregation.
We are trying to ensure that metrics originate from a single-writer so that we know these transformations will work. For these rules to work, we have to be sure that multiple processes cannot somehow write to the same stream, because that would mean they can overwrite each other. Thus, as @jsuereth implies, we will generate new metric streams following re-aggregation.
LGTM with a couple of minor suggestions.
## Overlap

Overlap occurs when more than one metric data point occurs for a data stream
IIUC, overlap is the violation of single-writer requirement. Just to link this together with the single writer section, can we call that out?
are overlapping points, receivers SHOULD eliminate points so that there are no
overlaps. Which data to select in overlapping cases is not specified.
L333:

> receivers SHOULD eliminate points so that there are no overlaps

and L338:

> OpenTelemetry collectors SHOULD export telemetry when they observe overlapping points in data streams
Don't these conflict each other, or is "receivers" meant to mean backends specifically?
When one process starts just as another exits, the appearance of overlapping
points may be expected. In this case, OpenTelemetry collectors SHOULD modify
points at the change-over using interpolation for Sum data points, to reduce
gaps to zero width in these cases, without any overlap.
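A toy sketch of the change-over adjustment the quoted text describes, assuming linear interpolation of a cumulative Sum point's value at the clip time; the field names and clipping policy are assumptions for illustration, not specified behavior:

```python
# Hypothetical sketch: when an exiting process's last Sum point overlaps the
# successor's first point, clip the earlier point's end to the later stream's
# start, interpolating its value linearly, so the gap and overlap are both zero.
def clip_changeover(last_old: dict, first_new: dict) -> dict:
    if last_old["end"] > first_new["start"]:
        frac = (first_new["start"] - last_old["start"]) / (last_old["end"] - last_old["start"])
        return {
            "start": last_old["start"],
            "end": first_new["start"],          # clip to the new stream's start
            "value": last_old["value"] * frac,  # interpolated Sum value
        }
    return last_old  # no overlap, nothing to do

clipped = clip_changeover({"start": 0, "end": 10, "value": 100},
                          {"start": 8, "end": 20, "value": 5})
assert clipped["end"] == 8 and clipped["value"] == 80.0
```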
As the resource semantic conventions specify for `service.instance.id` that the
`service.namespace`, `service.name`, `service.instance.id` triplet MUST be
globally unique, I'm not sure if this is expected or a misconfiguration.
So maybe this falls under the advice above: "collectors SHOULD export
telemetry when they observe overlapping points in data streams, so that the
user can monitor for erroneous configurations"?
@yurishkuro are you satisfied with the answers?
This PR reached the required number of approvals and the required company diversity of approvals. It has also been active for enough time. Merging.
* First cut at single writer principle.
* Fixes from reviews.
* Fixes from lint.
* Apply suggestions from code review
  Co-authored-by: Tigran Najaryan <4194920+tigrannajaryan@users.noreply.github.com>
* Fixes from review.
* Add missing temporality section to help justify overlap section.
* Fix markdown lint.
* Fixes from review.
* Update specification/metrics/datamodel.md
  Co-authored-by: Reiley Yang <reyang@microsoft.com>
* Update specification/metrics/datamodel.md
  Co-authored-by: Reiley Yang <reyang@microsoft.com>
* Fixes from review.
* Update specification/metrics/datamodel.md
  Co-authored-by: Max Edmands <443070+maxedmands@users.noreply.github.com>
* Update specification/metrics/datamodel.md
  Co-authored-by: Max Edmands <443070+maxedmands@users.noreply.github.com>

Co-authored-by: Tigran Najaryan <4194920+tigrannajaryan@users.noreply.github.com>
Co-authored-by: Reiley Yang <reyang@microsoft.com>
Co-authored-by: Max Edmands <443070+maxedmands@users.noreply.github.com>
Co-authored-by: Sergey Kanzhelev <S.Kanzhelev@live.com>
Fixes #1573
Changes
Adds Single-Writer principle documentation to metrics data model specification