First cut at single writer principle. #1574

Merged: 15 commits into open-telemetry:main on Apr 16, 2021

Conversation

jsuereth (Contributor):

Fixes #1573

Changes

Adds Single-Writer principle documentation to metrics data model specification

In specification/metrics/datamodel.md:
- `Resource` is expected to uniquely identify the source "service" or
"application".
- `Metric` name and `Attribute`s are expected to uniquely identify a
timeseries generated by the resource.
Member:

Copying my comment from the doc: I believe the InstrumentationLibrary name/version should also be part of the single-writer requirement. For example, you could have two instrumented HTTP libraries in the same process (resource), both producing the http.client.duration metric.

jsuereth (Contributor Author):

I actually almost added that, but I'm not 100% convinced.

TL;DR: Do we expect backends to append Resource labels + InstrumentationLibrary-as-labels to gain unique timeseries? For Resource, almost certainly (and in Google Cloud we directly map this concept). It's less certain to me that the same holds for InstrumentationLibrary.
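(Editor's note: a minimal Go sketch of what "append Resource labels + InstrumentationLibrary-as-labels" could look like on a backend. The types and label keys here are hypothetical illustrations, not OTLP definitions or any backend's actual API.)

```go
package main

import "fmt"

// Hypothetical, simplified shapes for this sketch; the real definitions
// live in the OTLP protobufs.
type Resource struct{ Attributes map[string]string }

type InstrumentationLibrary struct{ Name, Version string }

// labelsForTimeseries shows one way a backend could flatten resource and
// instrumentation-library identity into timeseries labels. The label keys
// ("otel.library.name", etc.) are arbitrary choices for this sketch.
func labelsForTimeseries(res Resource, lib InstrumentationLibrary, attrs map[string]string) map[string]string {
	labels := make(map[string]string)
	for k, v := range res.Attributes {
		labels[k] = v
	}
	labels["otel.library.name"] = lib.Name
	labels["otel.library.version"] = lib.Version
	for k, v := range attrs {
		labels[k] = v
	}
	return labels
}

func main() {
	res := Resource{Attributes: map[string]string{"service.name": "checkout"}}
	lib := InstrumentationLibrary{Name: "io.opentelemetry.http", Version: "1.0.0"}
	fmt.Println(labelsForTimeseries(res, lib, map[string]string{"http.method": "GET"}))
}
```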

Member:

@jmacd any thought on this?

@jsuereth (Contributor Author):

Based on SIG discussions, this needs some work defining "Metric Stream" and identity, for consistent vocabulary throughout the specification.

In specification/metrics/datamodel.md:
originating source of truth. In practical terms, this implies the following:

- All metric data points produced by OTel SDKs must be globally unique.
- `Resource` is expected to uniquely identify the source "service" or
Contributor:

I'd give a link to the service attributes here and avoid mentioning "application" in quotes. There is no semantic convention family for applications.

jsuereth (Contributor Author):

Right, and service isn't broad enough to denote what we mean here IMO. I'll just use service with a link as suggested, as the Resource debate belongs in a broader forum.


In real systems, particularly in misbehaving systems, there is in fact the
possibility of multiple writers sending data for the same metric and attribute
set. This requirement states that receivers SHOULD presume a single writer was
Contributor:

This case is duplication and its elimination is deduplication, right? If I'm not wrong, would it make sense to stick to that terminology, since it's already commonly known and used?

jsuereth (Contributor Author):

Yep, added both here and @jmacd can comment on his original wording.

In specification/metrics/datamodel.md:
When more than one process writes the same timeseries, OTLP data points may
appear to overlap. This condition typically results from misconfiguration, but
can also result from running identical processes (indicative of operating system
or SDK bugs). When there are overlapping points, receivers SHOULD eliminate
Contributor:

How does this work for Prometheus? Not deduping would be fine (and might be used for redundancy) because everything is cumulative/absolute. Do we care about this behavior for all temporalities, or only for deltas?

jsuereth (Contributor Author):

I'll let @jmacd comment here. From what I understand this can cause issues in Prometheus, but I wasn't able to get any specifics. I'll keep doing some investigation for a better answer here.

jsuereth marked this pull request as ready for review March 25, 2021 17:13
jsuereth requested review from a team March 25, 2021 17:13
In specification/metrics/datamodel.md:
- The originating `Resource`
- The metric stream's `name`.
- The attached `Attribute`s
- The metric stream's point kind.
Member:

Is the point kind really intended to be part of the identity? Is it possible to have Timeseries with the exact same set of the above 3 elements and only differing in the point kind? Do any/most metric backends that exist today support this notion?

jsuereth (Contributor Author):

We discussed this heavily in the SIG. TL;DR: a system producing the same timeseries with different point types is likely an error scenario / misbehaving system. For the purposes of OTel + aggregation we do NOT unify them, and we leave it to backends to determine the right thing to do.

This isn't about preventing backends from supporting it, but allowing them to do so and not forcing OTel to specify some kind of behavior that works with backends which do not.

Member:

This isn't about preventing backends from supporting it, but allowing them to do so and not forcing OTel to specify some kind of behavior that works with backends which do not.

On the other hand, if we explicitly declare that this is a valid situation, OpenTelemetry-compliant metric sources may start emitting it, causing problems for backends that don't expect to see such data. I think this is a case where we need to have a stance, and that stance has a consequence for backends. We either explicitly prohibit Timeseries with identical (resource, name, attributes) and different data points, or we explicitly allow it, which means backends have to support it; otherwise they can't accept OpenTelemetry-compliant data.

I can see how allowing this can be valuable (e.g. I may export a gauge and a histogram for the same instrument with exact same name). My concern here is that if we allow this we may suddenly put vendors in a situation where they can't really accept this data.

Just to be clear: I don't know if this is really a problem for backends. If you know that it is commonly accepted in the industry to emit such timeseries, then I don't see a problem.

jsuereth (Contributor Author):

One thing I'm not sure I made clear (though it was crystal clear in the discussion):

  1. The same 3 elements but differing point kind coming out of an SDK is considered an "error state" that should be handled by the SDK.
  2. The treatment here (and discussion) should be about what happens if the collector (or downstream processor) receives two streams that only differ by point type. The collector treats them as separate streams.
  3. Backends have the option to issue an error or try to handle the scenario. Where the difference is floats vs. ints, we think it's common to do conversion (and we may plan to define our own "weak conformance" conversion for the collector). We think the backend has the best chance of issuing a good error message notifying the user that they have two independent metric sources reported as if they were the same, with different metric types.

TL;DR: This is an error state; we just describe here how the "model" treats that error scenario so progress can be made and the best component has a chance to give the best error message.
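(Editor's note: a minimal Go sketch of the float-vs-int case in point 3 above — widening integer points to double so two streams that differ only in number type can be compared or merged. The NumberPoint type is hypothetical, not collector code.)

```go
package main

import "fmt"

// NumberPoint is a hypothetical stand-in for an OTLP number data point,
// which may carry either an integer or a floating-point value.
type NumberPoint struct {
	IsInt    bool
	IntVal   int64
	FloatVal float64
}

// asDouble applies the "weak conformance" idea: widen integer points to
// double so downstream merging sees a single number type.
// (Precision loss is possible for integers above 2^53.)
func asDouble(p NumberPoint) float64 {
	if p.IsInt {
		return float64(p.IntVal)
	}
	return p.FloatVal
}

func main() {
	fmt.Println(asDouble(NumberPoint{IsInt: true, IntVal: 42})) // 42
	fmt.Println(asDouble(NumberPoint{FloatVal: 3.5}))           // 3.5
}
```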

Member:

The treatment here (and discussion) should be about what happens if the collector (or downstream processor) receives two streams that only differ by point type. The collector treats them as separate streams.

Our stance for the Collector so far has been that by default it is not in the business of interpreting data or "treating" it in any way. It only does so when explicitly configured by the user to do so, or when a particular non-OTLP format is used and a translation must happen.

Do you have a different expectation from the Collector?

TL;DR: This is an error state; we just describe here how the "model" treats that error scenario so progress can be made and the best component has a chance to give the best error message.

Sounds good, makes sense to me. I may have missed it, but I do not see this in the text. Can we explicitly call out that this is an error state in the text?

jsuereth (Contributor Author):

Our stance for the Collector so far has been that by default it is not in the business of interpreting data or "treating" it in any way.
It only does so when explicitly configured by the user to do so, or when a particular non-OTLP format is used and a translation must happen.

Do you have a different expectation from the Collector?

I think some of the built-in operations (label removal, aggregations, etc.) that are available in the baseline image should be clearly specified against the data model. Yes, I agree users configure these things, but that doesn't mean we shouldn't clearly outline what they do and ensure their output stays compatible in shape, so as to cause the fewest possible issues with exporters.

Specifically: I think some generally-available collector operations should be specified in a way that they can be used without causing issues across Exporters. If users provide their own processors that do crazier things outside this model, that's fine and outside any guarantees.

Sounds good, makes sense to me. I may have missed it, but I do not see this in the text. Can we explicitly call out that this is an error state in the text?

I thought I had; let me add that now.

Contributor:

I agree with everything being said here. Different point kinds should be considered errors. I agree that the collector's default behavior should be to simply pass the data through. We are trying to lay out correctness conditions for aggregating data and want definitions that help aggregators do the Right Thing when these errors arise.

It seems to me that metric name, the resource, and metric attributes are first class identifiers for metric streams. The point kind and unit are somehow in a different category: these should be treated as distinct for the purposes of pass-through data processing, but should be treated as errors for aggregation.

I'm not sure we have the proper terminology to describe properties of a metric stream that are identifying but erroneous vs identifying and compatible.

Also I think we may be distracted by trying to define what is and is not an error, where we were trying to define what is a stream and its single-writer property. We may have a set of individually valid streams with single writers, that when considered together form an erroneous condition. Example: when two streams have the same name and resource, but distinct point kind and label values. Each stream is valid, but we should never see a mixture of point kinds for a metric name, independent of how many streams it writes.
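(Editor's note: a minimal Go sketch of the distinction drawn above — resource, name, and attributes as first-class identity, with point kind and unit identifying for pass-through but an error when mixed under aggregation. All names and types here are hypothetical illustrations.)

```go
package main

import "fmt"

// StreamKey holds the first-class identifiers of a metric stream.
type StreamKey struct {
	Resource   string // e.g. a canonicalized resource fingerprint
	MetricName string
	Attributes string // e.g. a canonicalized attribute-set fingerprint
}

// Stream pairs the identity with properties that must agree for
// aggregation (point kind, unit) but are not first-class identity.
type Stream struct {
	Key       StreamKey
	PointKind string // "sum", "gauge", "histogram", ...
	Unit      string
}

// checkAggregatable reports whether two streams with the same key can be
// aggregated together; a point-kind or unit mismatch is an error, not a
// pair of distinct streams.
func checkAggregatable(a, b Stream) error {
	if a.Key != b.Key {
		return nil // genuinely distinct streams; nothing to merge
	}
	if a.PointKind != b.PointKind {
		return fmt.Errorf("metric %q: conflicting point kinds %q vs %q",
			a.Key.MetricName, a.PointKind, b.PointKind)
	}
	if a.Unit != b.Unit {
		return fmt.Errorf("metric %q: conflicting units %q vs %q",
			a.Key.MetricName, a.Unit, b.Unit)
	}
	return nil
}

func main() {
	k := StreamKey{Resource: "svc=checkout", MetricName: "http.client.duration", Attributes: "http.method=GET"}
	a := Stream{Key: k, PointKind: "histogram", Unit: "ms"}
	b := Stream{Key: k, PointKind: "sum", Unit: "ms"}
	fmt.Println(checkAggregatable(a, b))
}
```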

Contributor:

Question: if we get multiple point kinds with the same metric name, how would we know which stream is erroneous? Do we need to register the point kind + metric name beforehand?

jsuereth (Contributor Author):

@victlu It's likely impossible for OpenTelemetry to know which is erroneous. A backend that supports schema'd metrics could use the currently registered metric schema to determine which one is "stale" and ingest the correct one, but the key point here is that this is an "error" scenario.

In specification/metrics/datamodel.md:

The OTLP Metrics protocol is designed as a standard for transporting metric
data. To describe the intended use of this data and the associated semantic
meaning, OpenTelemetry metric data types will be linked into a framework
meaning, OpenTelemetry metric data stream types will be linked into a framework
Member:

OpenTelemetry metric data stream

never defined, yet imbued with special meaning in this document

jsuereth (Contributor Author):

It's defined on line 202, and yes this document imbues it with special meaning. We decided in the metric data model SiG that we needed a term to identify an "in-motion" stream of metric data that OpenTelemetry operates on to differentiate it from a Time Series generated from that stream.

Should I move the definition/description of metric data stream further above? Do you need more motivation for why we want a different term here?

Put simply, a lot of your concerns/comments are about the TimeSeries generated from OpenTelemetry, whereas this document outlines what's required for a Metric Data Stream. See my follow-on comments for examples where I think having this distinction helps OpenTelemetry AND backends.

can be converted directly into Timeseries, and share the same identity
characteristics for a Timeseries. A metric stream is identified by:

- The originating `Resource`
Member:

Why is this a requirement? Sometimes I want resource dimensions to be present in a time series, sometimes I don't, it depends on the use case and the cost of cardinality.

jsuereth (Contributor Author):

I think you're taking the view that a TimeSeries generated from OTLP doesn't need the resource. That's completely fine; we're documenting what a Metric Data Stream is and how it's identified. Backends can map these to timeseries however they wish (as is done today), though OTel should come with a recommended conversion.

What this specification means is that within OTLP you'll have a resource that's part of your identity (which is inherently true by protocol structure).
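(Editor's note: the "protocol structure" being referred to, paraphrased as Go structs for illustration — not the generated OTLP code. At the time of this PR, OTLP nested metrics under ResourceMetrics and InstrumentationLibraryMetrics, so every data point arrives with exactly one Resource attached.)

```go
package otlpshape

// Paraphrase of the OTLP metrics nesting (circa this PR), not the generated
// protobuf code: every Metric arrives inside an InstrumentationLibraryMetrics,
// which arrives inside a ResourceMetrics, so a Resource is always attached
// and is inherently part of a stream's identity.

type Resource struct{ Attributes map[string]string }

type InstrumentationLibrary struct{ Name, Version string }

type ResourceMetrics struct {
	Resource                      Resource
	InstrumentationLibraryMetrics []InstrumentationLibraryMetrics
}

type InstrumentationLibraryMetrics struct {
	InstrumentationLibrary InstrumentationLibrary
	Metrics                []Metric
}

type Metric struct {
	Name, Description, Unit string
	// Data points (sum, gauge, histogram, ...) omitted from this sketch.
}
```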

Contributor:

We are trying to say that points with different resources belong to different streams, even when all of the other identifying properties are identical.

Resource qualifies as identifying, whereas Unit does not qualify as such. Using the reasoning in the previous paragraph to explain this: points with different units, with all else equal, are considered an error. They are not distinct streams.

I tried to apply the same reasoning to data point kind, but ran into trouble. Two points with different kinds, but all else equal, should be considered an error, not distinct streams.

originating source of truth. In practical terms, this implies the following:

- All metric data streams produced by OTel SDKs must be globally uniquely
produced and free from duplicates. All metric data streams can be uniquely
Member:

free from duplicates

What is considered a duplicate?

jsuereth (Contributor Author):

This was from the @jmacd document. I think this is synonymous with "overlap", defined below. I can update but want to check with Josh to make sure I haven't missed an important point here.

Contributor:

I am thinking of "duplicate" points as points in a stream with identical start and end timestamps, whereas overlapping points are defined based on the temporality. Cumulative points overlap, in one sense, but two points are compatible when they share a start time. Two cumulative streams would have an overlap condition when multiple start times are mixed with overlapping end times, however. Gauge (instantaneous temporality) and Delta temporality points have different overlap rules.

Still, I think duplicate points are in a class of their own, particularly when we begin to talk about external labels as Prometheus defines them. Duplicate points may be expected, and when they are truly duplicates we typically discard all but one.
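(Editor's note: a minimal Go sketch of the reading above — duplicate cumulative points share both timestamps, whereas overlap means different start times with intersecting windows. The types are hypothetical, not OTLP definitions.)

```go
package main

import (
	"fmt"
	"time"
)

// CumulativePoint is a hypothetical stand-in for a cumulative data point
// with a start time and an end time.
type CumulativePoint struct {
	Start, End time.Time
	Value      float64
}

// isDuplicate: identical start and end timestamps, per the reading above;
// typically all but one such point would be discarded.
func isDuplicate(a, b CumulativePoint) bool {
	return a.Start.Equal(b.Start) && a.End.Equal(b.End)
}

// overlaps flags the erroneous cumulative case: different start times whose
// windows still intersect. Points sharing a start time are compatible.
func overlaps(a, b CumulativePoint) bool {
	if a.Start.Equal(b.Start) {
		return false
	}
	return a.Start.Before(b.End) && b.Start.Before(a.End)
}

func main() {
	t0 := time.Now()
	a := CumulativePoint{Start: t0, End: t0.Add(time.Minute), Value: 10}
	b := CumulativePoint{Start: t0.Add(30 * time.Second), End: t0.Add(90 * time.Second), Value: 4}
	fmt.Println(isDuplicate(a, a), overlaps(a, b)) // true true
}
```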

Comment on lines +248 to +250
- Aggregations of metric streams must only be written from a single logical
source.
__Note: This implies aggregated metric streams must reach one destination__.
Member:

The part in bold kinda makes sense to me, but I don't see why this is a requirement of the protocol. The part about single source doesn't make sense to me.

I think the protocol needs to support data in a format that can be repeatedly re-aggregated, if necessary.

jsuereth (Contributor Author):

I'm not sure how this requirement leads to data that can't be repeatedly re-aggregated. Can you provide a counterexample where it's an issue? AFAIK you should be able to continually re-aggregate assuming:

  1. All incoming streams are uniquely identified (or could be logically identified, e.g. StatsD incoming counters)
  2. All reaggregations output as a new identity (e.g. different attributes or resource)

In practice that's actually a pretty easy requirement. For backends where you drop the resource, you're back to aggregating what looks like the same timeseries from multiple sources, but within OTLP we can identify their separate sources.
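(Editor's note: a minimal Go sketch of point 2 above — re-aggregating delta points while emitting the result under a new, reduced identity by dropping an attribute. Everything here is a hypothetical illustration, not collector code.)

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// deltaPoint is a hypothetical delta data point with its attribute set.
type deltaPoint struct {
	Attrs map[string]string
	Value float64
}

// reaggregate sums delta points after removing dropAttr, so every output
// stream carries a new (reduced) identity rather than overwriting an input one.
func reaggregate(points []deltaPoint, dropAttr string) map[string]float64 {
	out := make(map[string]float64)
	for _, p := range points {
		keys := make([]string, 0, len(p.Attrs))
		for k := range p.Attrs {
			if k != dropAttr {
				keys = append(keys, k)
			}
		}
		sort.Strings(keys) // canonical ordering for the new stream identity
		parts := make([]string, 0, len(keys))
		for _, k := range keys {
			parts = append(parts, k+"="+p.Attrs[k])
		}
		out[strings.Join(parts, ",")] += p.Value
	}
	return out
}

func main() {
	points := []deltaPoint{
		{Attrs: map[string]string{"http.method": "GET", "host": "a"}, Value: 3},
		{Attrs: map[string]string{"http.method": "GET", "host": "b"}, Value: 5},
	}
	fmt.Println(reaggregate(points, "host")) // map[http.method=GET:8]
}
```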

Contributor:

We are trying to define a "single-writer" property so that we know when it is safe to re-aggregate data. There are ways to process metrics data that remove the single-writer property (e.g., horizontal scaling) and there are ways to restore that property (e.g., having a message-queue in place) to allow re-aggregation.

We are trying to ensure that metrics originate from a single-writer so that we know these transformations will work. For these rules to work, we have to be sure that multiple processes cannot somehow write to the same stream, because that would mean they can overwrite each other. Thus, as @jsuereth implies, we will generate new metric streams following re-aggregation.

@reyang (Member) left a comment:

LGTM with a couple of minor suggestions.

jsuereth and others added 3 commits April 2, 2021 13:39
Co-authored-by: Reiley Yang <reyang@microsoft.com>
Co-authored-by: Reiley Yang <reyang@microsoft.com>
jsuereth and others added 2 commits April 6, 2021 12:00
Co-authored-by: Max Edmands <443070+maxedmands@users.noreply.github.com>
Co-authored-by: Max Edmands <443070+maxedmands@users.noreply.github.com>
In specification/metrics/datamodel.md:

## Overlap

Overlap occurs when more than one metric data point occurs for a data stream
Member:

IIUC, overlap is a violation of the single-writer requirement. Just to link this together with the single-writer section, can we call that out?

Comment on lines +333 to +334
are overlapping points, receivers SHOULD eliminate points so that there are no
overlaps. Which data to select in overlapping cases is not specified.
Member:

L333

receivers SHOULD eliminate points so that there are no overlaps

and L338

OpenTelemetry collectors SHOULD export telemetry when they observe overlapping points in data streams

Don't these conflict with each other, or is "receivers" meant to mean backends specifically?

Comment on lines +344 to +347
When one process starts just as another exits, the appearance of overlapping
points may be expected. In this case, OpenTelemetry collectors SHOULD modify
points at the change-over using interpolation for Sum data points, to reduce
gaps to zero width in these cases, without any overlap.
Member:

As the resource semantic conventions specify for service.instance.id, I'm not sure whether this is expected or a misconfiguration:

the `service.namespace`, `service.name`, `service.instance.id` triplet MUST be globally unique

So maybe this falls under the advice above: "collectors SHOULD export telemetry when they observe overlapping points in data streams, so that the user can monitor for erroneous configurations"?
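(Editor's note: one possible reading of the interpolation advice in the excerpt above, sketched in Go with hypothetical types — trim the exiting process's last Sum point so the two series abut with zero gap and zero overlap, scaling its value linearly under a uniform-rate assumption. The specification text does not prescribe this exact procedure; delta temporality keeps the arithmetic simplest.)

```go
package main

import (
	"fmt"
	"time"
)

// sumPoint is a hypothetical delta Sum data point covering (Start, End].
type sumPoint struct {
	Start, End time.Time
	Value      float64
}

// trimOverlap shortens prev so it ends where next begins, scaling its value
// linearly over the shortened window (assuming a uniform rate), so the
// change-over has neither a gap nor an overlap.
func trimOverlap(prev, next sumPoint) sumPoint {
	if !next.Start.Before(prev.End) {
		return prev // no overlap: nothing to do in this sketch
	}
	full := prev.End.Sub(prev.Start).Seconds()
	kept := next.Start.Sub(prev.Start).Seconds()
	if full <= 0 || kept <= 0 {
		return prev // degenerate windows: leave untouched
	}
	return sumPoint{Start: prev.Start, End: next.Start, Value: prev.Value * kept / full}
}

func main() {
	t0 := time.Now()
	prev := sumPoint{Start: t0, End: t0.Add(60 * time.Second), Value: 60}
	next := sumPoint{Start: t0.Add(45 * time.Second), End: t0.Add(105 * time.Second), Value: 10}
	fmt.Println(trimOverlap(prev, next).Value) // 45 under the uniform-rate assumption
}
```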

@SergeyKanzhelev (Member):

@yurishkuro are you satisfied with the answers?

@SergeyKanzhelev (Member):

This PR has reached the required number of approvals and the required company diversity of approvals. It has also been active long enough. Merging.

SergeyKanzhelev merged commit 0a7781c into open-telemetry:main Apr 16, 2021
jsuereth deleted the wip-single-writer branch April 16, 2021 22:25
carlosalberto pushed a commit to carlosalberto/opentelemetry-specification that referenced this pull request Oct 31, 2024
* First cut at single writer principle.

* Fixes from reviews.

* Fixes from lint.

* Apply suggestions from code review

Co-authored-by: Tigran Najaryan <4194920+tigrannajaryan@users.noreply.github.com>

* Fixes from review.

* Add missing temporality section to help justify overlap section.

* Fix markdown lint.

* Fixes from review.

* Update specification/metrics/datamodel.md

Co-authored-by: Reiley Yang <reyang@microsoft.com>

* Update specification/metrics/datamodel.md

Co-authored-by: Reiley Yang <reyang@microsoft.com>

* Fixes from review.

* Update specification/metrics/datamodel.md

Co-authored-by: Max Edmands <443070+maxedmands@users.noreply.github.com>

* Update specification/metrics/datamodel.md

Co-authored-by: Max Edmands <443070+maxedmands@users.noreply.github.com>

Co-authored-by: Tigran Najaryan <4194920+tigrannajaryan@users.noreply.github.com>
Co-authored-by: Reiley Yang <reyang@microsoft.com>
Co-authored-by: Max Edmands <443070+maxedmands@users.noreply.github.com>
Co-authored-by: Sergey Kanzhelev <S.Kanzhelev@live.com>
Successfully merging this pull request may close these issues.

Metrics Datamodel: Write "Single-Writer" assumptions to DataModel specification.