First cut at single writer principle. #1574
Why is this a requirement? Sometimes I want resource dimensions to be present in a time series and sometimes I don't; it depends on the use case and the cost of cardinality.
I think you're taking the view that a TimeSeries generated from OTLP doesn't need the resource. That's completely fine; we're documenting what a Metric Data Stream is and how it's identified. Backends can map these to timeseries however they wish (as is done today), though OTel should come with a recommended conversion.
What this specification means is that within OTLP you'll have a resource that's part of your identity (which is inherently true by protocol structure).
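To make that concrete, here is a minimal sketch (the `Point` type and `identity` function are hypothetical simplifications for illustration, not the collector's actual API) of a stream identity that includes the resource, so that two points differing only in resource belong to different streams:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// Hypothetical, simplified stand-ins for OTLP structures.
type Point struct {
	Resource   map[string]string // resource attributes
	Name       string            // metric name
	Attributes map[string]string // point attributes
}

// identity builds a canonical stream key. Resource is included, so two
// points differing only in resource hash to different streams.
func identity(p Point) string {
	return strings.Join([]string{canonical(p.Resource), p.Name, canonical(p.Attributes)}, "|")
}

// canonical renders a map deterministically, sorted by key.
func canonical(m map[string]string) string {
	keys := make([]string, 0, len(m))
	for k := range m {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	parts := make([]string, 0, len(keys))
	for _, k := range keys {
		parts = append(parts, k+"="+m[k])
	}
	return strings.Join(parts, ",")
}

func main() {
	a := Point{
		Resource:   map[string]string{"service.instance.id": "instance-1"},
		Name:       "http.server.requests",
		Attributes: map[string]string{"code": "200"},
	}
	b := a
	b.Resource = map[string]string{"service.instance.id": "instance-2"}
	fmt.Println(identity(a) == identity(b)) // false: distinct streams
}
```

A backend that drops the resource is simply computing a coarser key; the identity above is what OTLP itself carries.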
We are trying to say that points with different resources belong to different streams, even when all of the other identifying properties are identical.
Resource qualifies as identifying, whereas Unit does not. Using the reasoning in the previous paragraph to explain this: points with different units, all else equal, are considered an error; they are not distinct streams.
I tried to apply the same reasoning to data point kind, but ran into trouble: two points with different kinds, but all else equal, should be considered an error, not distinct streams.
Is the point kind really intended to be part of the identity? Is it possible to have Timeseries with the exact same set of the above three elements, differing only in the point kind? Do any/most metric backends that exist today support this notion?
We discussed this heavily in the SIG. TL;DR: a system producing the same timeseries with different point types is likely an error scenario or a misbehaving system. For the purposes of OTel + aggregation we do NOT unify them, and we leave it to backends to determine the right thing to do.
This isn't about preventing backends from supporting it, but about allowing them to do so without forcing OTel to specify some kind of behavior that works with backends which do not.
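As a hedged sketch of that "do not unify" stance (the `PointKind` enum and `streamTable` are invented for illustration): an aggregator records the first-seen kind per stream identity and reports a conflict rather than merging, leaving the resolution to the backend:

```go
package sketch

import "fmt"

// Hypothetical enum; real OTLP encodes the point kind structurally.
type PointKind int

const (
	KindSum PointKind = iota
	KindGauge
	KindHistogram
)

// streamTable maps a stream identity (see the earlier sketch) to its
// first-seen point kind.
type streamTable struct {
	kinds map[string]PointKind
}

// observe flags a conflict instead of unifying: two points with the same
// identity but different kinds are treated as an error, and deciding
// which one to keep is left to the backend.
func (t *streamTable) observe(id string, kind PointKind) error {
	if prev, ok := t.kinds[id]; ok && prev != kind {
		return fmt.Errorf("conflicting point kinds for %q: %v vs %v", id, prev, kind)
	}
	t.kinds[id] = kind
	return nil
}
```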
On the other hand, if we explicitly declare that this is a valid situation, OpenTelemetry-compliant metric sources may start emitting it, causing problems for backends that don't expect to see such data. I think this is a case where we need to take a stance, and that stance has consequences for backends: we either explicitly prohibit Timeseries with identical (resource, name, attributes) and different point kinds, or we explicitly allow them, which means backends have to support them or they can't accept OpenTelemetry-compliant data.
I can see how allowing this can be valuable (e.g. I may export a gauge and a histogram for the same instrument with the exact same name). My concern is that if we allow this we may suddenly put vendors in a situation where they can't really accept the data.
Just to be clear: I don't know if this is really a problem for backends. If you know that it is commonly accepted in the industry to emit such timeseries, then I don't see a problem.
One thing I'm not sure I made clear (though it was crystal clear in the discussion). TL;DR: this is an error state; we just describe here how the "model" treats that error scenario, so that progress can be made and the best-placed component has a chance to give the best error message.
Our stance for the Collector so far has been that by default it is not in the business of interpreting data or "treating" it in any way. It only does so when explicitly configured by the user to do so, or when a particular non-OTLP format is used and a translation must happen.
Do you have a different expectation from the Collector?
Sounds good, makes sense to me. I may have missed it, but I do not see this in the text. Can we explicitly call out that this is an error state in the text?
I think some of the built-in operations (label removal, aggregations, etc.) that are available in the baseline image should be clearly specified against the Data Model. Yes, I agree users configure these things, but that doesn't mean we shouldn't clearly outline what they do and ensure their "compatibility" in shape, to lead to the fewest possible issues with exporters.
Specifically: I think some generally-available collector operations should be specified in a way that they can be used without causing issues across Exporters. If users provide their own processors that do crazier things outside this model, that's fine and outside any guarantees.
I thought I had; let me add that now.
I agree with everything being said here. Different point kinds should be considered errors. I agree that the collector's default behavior should be to simply pass the data through. We are trying to lay out correctness conditions for aggregating data and want definitions that help aggregators do the Right Thing when these errors arise.
It seems to me that metric name, the resource, and metric attributes are first class identifiers for metric streams. The point kind and unit are somehow in a different category: these should be treated as distinct for the purposes of pass-through data processing, but should be treated as errors for aggregation.
I'm not sure we have the proper terminology to describe properties of a metric stream that are identifying but erroneous vs identifying and compatible.
Also, I think we may be distracted by trying to define what is and is not an error, when we were trying to define what a stream is and its single-writer property. We may have a set of individually valid streams with single writers that, when considered together, form an erroneous condition. Example: two streams with the same name and resource, but distinct point kind and label values. Each stream is valid, but we should never see a mixture of point kinds for a metric name, independent of how many streams write to it.
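One possible way to pin down that terminology, as a sketch (these type names are invented here, not proposed spec terms): split stream properties into identifying fields, which select a stream, and structural fields, which must agree within an identity or the data is in error:

```go
package sketch

import "fmt"

// Identifying properties: distinct values mean distinct streams.
type StreamIdentity struct {
	Resource   string
	Name       string
	Attributes string
}

// Structural properties: within a single identity these must agree;
// a mismatch is an error, not a new stream.
type StreamStructure struct {
	PointKind string
	Unit      string
}

// checkCompatible is what an aggregator would run before merging; a
// pass-through processor can skip it and forward both streams untouched.
func checkCompatible(id StreamIdentity, seen, incoming StreamStructure) error {
	if seen != incoming {
		return fmt.Errorf("stream %+v: incompatible structure %+v vs %+v", id, seen, incoming)
	}
	return nil
}
```

That split captures the distinction above: pass-through processing never consults `StreamStructure`, while aggregation must, which is where the error surfaces.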
Question: if we get multiple point kinds with the same metric name, how would we know which stream is erroneous? Do we need to register the point kind + metric name in advance?
@victlu It's likely impossible for OpenTelemetry to know which is erroneous. A backend that supports schema'd metrics could use the currently registered metric schema to determine which one is "stale" and ingest the correct one, but the key here is that this is an "error" scenario.
The part in bold kind of makes sense to me, but I don't see why this is a requirement of the protocol. The part about a single source doesn't make sense to me.
I think the protocol needs to support data in a format that can be repeatedly re-aggregated, if necessary.
I'm not sure how this requirement leads to data that can't be repeatedly re-aggregated. Can you provide a counter-example where it's an issue? AFAIK you should be able to continually re-aggregate, assuming:
In practice that's actually a pretty easy requirement. For backends where you drop the resource, you're back to aggregating what looks like the same timeseries from multiple sources, but within OTLP we can identify their separate sources.
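A small sketch of that drop-the-resource case, under the assumption of delta temporality (the `DeltaPoint` type is hypothetical): merging the single-writer streams is a plain sum, and the outputs are new streams with the aggregator as their single writer:

```go
package sketch

// Hypothetical delta point, reduced to the fields that matter here.
type DeltaPoint struct {
	Resource string  // identity of the original single-writer stream
	Name     string
	Value    float64 // delta since the previous point
}

// dropResource re-aggregates many single-writer streams into new streams
// keyed by name alone. Delta temporality makes the merge a plain sum,
// and the result can itself be re-aggregated again downstream.
func dropResource(points []DeltaPoint) map[string]float64 {
	out := make(map[string]float64)
	for _, p := range points {
		out[p.Name] += p.Value
	}
	return out
}
```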
We are trying to define a "single-writer" property so that we know when it is safe to re-aggregate data. There are ways to process metrics data that remove the single-writer property (e.g., horizontal scaling) and there are ways to restore that property (e.g., having a message-queue in place) to allow re-aggregation.
We are trying to ensure that metrics originate from a single-writer so that we know these transformations will work. For these rules to work, we have to be sure that multiple processes cannot somehow write to the same stream, because that would mean they can overwrite each other. Thus, as @jsuereth implies, we will generate new metric streams following re-aggregation.
L333 and L338: don't these conflict with each other, or is "receivers" meant to mean backends specifically?
As resource semantic conventions are specified with `service.instance.id`, I'm not sure if this is expected or a misconfiguration. So maybe this falls under the advice above – "collectors SHOULD export telemetry when they observe overlapping points in data streams, so that the user can monitor for erroneous configurations"?