
Resolve semantic inconsistencies for non-traditional messaging #1027

Merged · 16 commits · Oct 21, 2020

Conversation

kenfinnigan (Member):

Fixes #977

Changes

Update tracing semantic conventions for messaging systems to better reflect "non-traditional" approaches such as Apache Kafka.

Include recommended semantic attributes for Apache Kafka

kenfinnigan (Member Author):

Can anyone share what the "check-errors" failures are?

arminru added the labels area:semantic-conventions (Related to semantic conventions) and spec:trace (Related to the specification/trace directory) on Sep 28, 2020
Comment on lines 175 to 201
| `net.peer.name` | `"ms"` | `"ms"` | `"ms"` |
| `net.peer.port` | `1234` | `1234` | `1234` |
Member:

Why did you remove the hostname and port from the previous version? Please leave this as-is.

Member Author:

Was deliberately removed as it's not reasonable to capture that information.

What is it meant to represent in a clustered broker situation? When sending a message, you don't know which of the possible brokers actually processed it.

Member:

This is to specify the endpoint to which the message was sent (or from which it was received), which should also be available in a clustered situation. Can you please elaborate?

Member Author:

If I'm sending/receiving messages with Kafka, there is no information on the consumer or producer side that knows what actual broker sent/received the message.

All the producer/consumer knows is it communicated with a particular bootstrap server to send/receive. That server is not the one that sent/received the message, nor is it constant as the cluster can be repartitioned.

Member:

Would it still make sense to specify the bootstrap server as this is the direct counterpart for this span here, or would you not deem this information valuable?

Member Author:

I don't think so, as it could disappear at any point in time.

It's the initial point to which a consumer/producer retrieves the partitions, but where messages are sent can change as the partitioned cluster changes.

From a traceability perspective, I don't think the bootstrap server names would be meaningful.

Oberon00 (Member) left a comment:

While it is nice that messaging semantic conventions are made fit for non-traditional messaging, please also keep compatibility with traditional messaging systems (they are still in use).
It seems to me that rather than just adding support for modern messaging systems, this PR changes the old messaging conventions quite a bit more than necessary. I think at this stage it is totally OK to have a few breaking changes, but they need to be justified.

EDIT: E.g. you often remove attributes with the argument that they cannot be filled out reasonably. While this may be true for the modern messaging systems you think of, it is not true for traditional messaging systems. Equivalents of almost all of the attributes here have been successfully used within Dynatrace for a long time, and are thus certainly useful for at least some messaging systems (often JMS based). Maybe we have to make some previously required attributes optional though.

kenfinnigan (Member Author):

@Oberon00 @arminru, thanks for the feedback.

I've revised the PR to hopefully align better with the consensus view. Apologies for being too zealous in removing some things that only apply to traditional messaging.

Oberon00 dismissed their stale review on Sep 28, 2020:

Looks better now

iNikem (Contributor) commented Sep 29, 2020:

I don't think this PR gives a clear definition of what is a traditional messaging system and what is not. If the spec has wording like "Only applicable to traditional messaging systems", then we have to define more precisely which systems are "traditional" and which are not.

Additionally at least one of `net.peer.name` or `net.peer.ip` from the [network attributes][] is required and `net.peer.port` is recommended.
Furthermore, it is strongly recommended to add the [`net.transport`][] attribute and follow its guidelines, especially for in-process queueing systems (like [Hangfire][], for example).
It is strongly recommended to add the [`net.transport`][] attribute and follow its guidelines, especially for in-process queueing systems (like [Hangfire][], for example).
`messaging.service` refers to the logical name of the external broker or messaging system where a message was sent to, or received from. In an environment such as Kubernetes, it would be the Kubernetes Service Name.
Member:

Can you please clarify how this differs from the peer.service general span attribute?

Member Author:

My impression of peer.service is a service running on a specific peer, or host, of a network.

In an environment like Kubernetes a single "peer", or node, can have many different services exposed on it. Meaning that sending a message to a Kafka service named "my-kafka" means more than a peer.service of "mycluster.kube", or some permutation thereof.

It's possible I'm missing something, but that's my impression, which is why I felt a separate messaging.service was needed. Granted, there might be situations where they are the same, but in Kubernetes that would be rare.

Member:

I would interpret peer.service to be the particular service this span is directed at, which also works if a given peer or host exposes multiple different services.

@anuraaga do you think we could clarify this further in the spec?

Member Author:

@arminru am I correct in saying your interpretation of peer.service is actually the name of the service?

From my perspective, all the other peer.* or net.peer.* information relates to a specific host or network node, which is why I see peer.service as referring to a service on a specific host, as opposed to a virtual service name that could be clustered across many hosts/nodes.

Member:

Yes. net.* is about lower-level networking information, where net.peer.* is about the host on the other end. peer.service, on the other hand, is about the higher-level service. I don't think there's any other peer.* attribute.

Member:

I'm not sure about that. The peer.service attribute is something that is expected to be manually configured by the user, not automatically determined by the instrumentation.

See https://github.com/open-telemetry/opentelemetry-specification/blob/master/specification/trace/semantic_conventions/span-general.md#general-remote-service-attributes

Users can define what the name of a service is based on their particular semantics in their distributed system. Instrumentations SHOULD provide a way for users to configure this name.
...
Examples of peer.service that users may specify:

CC @anuraaga who introduced peer.service in #652

Still it sounds like this concept is not specific to messaging, so maybe we should have peer.detected_service if peer.service is otherwise fitting.

Member Author:

@iNikem did you mean messaging.service duplicates peer.service or the Resource's service.name? Personally, I think it overlaps with the latter rather than the former.

I don't see peer.service being a great name for what it is, but if that's the preferred approach I will remove messaging.service.

Member:

They actually belong together. Given, for example, a client A sending a request to a server process B exposing a service called X:
- A creates a span for the request and sets peer.service="X".
- B, on the other end, sets service.name="X" on its resource, which will be added to the span it creates for tracking the processing of A's request.
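For illustration, a minimal sketch of this pairing using the OpenTelemetry Java API (the service name "X" is taken from the example above; the class and span names are illustrative, and without a configured SDK the tracer is a no-op):

import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.common.AttributeKey;
import io.opentelemetry.api.common.Attributes;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.SpanKind;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.sdk.resources.Resource;

public class PeerServiceSketch {
  public static void main(String[] args) {
    // Client A: the outgoing span names the remote service it is calling.
    Tracer tracer = GlobalOpenTelemetry.getTracer("example");
    Span request = tracer.spanBuilder("call X")
        .setSpanKind(SpanKind.CLIENT)
        .startSpan();
    request.setAttribute("peer.service", "X"); // matches B's service.name
    request.end();

    // Server B: its own identity is declared once, on its Resource,
    // and is attached to every span B creates.
    Resource serviceB = Resource.getDefault().merge(
        Resource.create(Attributes.of(AttributeKey.stringKey("service.name"), "X")));
  }
}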

Member Author:

Thanks @arminru, it hadn't been clear to me where Resource fits; that helps.

I will drop messaging.service and use peer.service.

Member:

Thanks! Please add a note in the Kafka section saying it is recommended to set this and how to determine the appropriate value to make sure instrumentation writers are aware of it.

@anuraaga please verify if you think it's fine that instrumentations set this automatically (while still allowing users to override it, although I'm not really sure how this will work in practice).

| `messaging.message_id` | A value used by the messaging system as an identifier for the message, represented as a string. | No |
| `messaging.conversation_id` | The [conversation ID](#conversations) identifying the conversation to which the message belongs, represented as a string. Sometimes called "Correlation ID". | No |
| `messaging.message_payload_size_bytes` | The (uncompressed) size of the message payload in bytes. Also use this attribute if it is unknown whether the compressed or uncompressed payload size is reported. | No |
| `messaging.message_payload_compressed_size_bytes` | The compressed size of the message payload in bytes. | No |

Additionally at least one of `net.peer.name` or `net.peer.ip` from the [network attributes][] is required and `net.peer.port` is recommended.
Member:

Did you remove this sentence on purpose? Why?

Member Author:

net.peer.name or net.peer.ip are not required for Kafka, MQTT, etc.

I can change it to say it's recommended instead?

Member:

Wouldn't you just specify the hostname (or IP) of the broker that particular message is sent to here in case of Kafka? Or are you saying there are client libraries where this information is not available and therefore instrumentations would merely be able to specify (one of) the bootstrap server(s) here?

Member Author:

Information about where the message was sent to, or received from, is not available in the Kafka clients.

As mentioned in another comment, I don't think the bootstrap servers are the right information to capture instead.

Contributor:

This is not technically correct. There is org.apache.kafka.clients.consumer.Consumer#partitionsFor which gives you a leader node for a given topic and partition. Producer gives the same metadata. ConsumerRebalanceListener can be used to refresh that metadata when it changes.

Thus you can get that information for Kafka. Whether it is feasible is a totally separate question.
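For reference, a sketch of the metadata lookup described above, using only standard Kafka client calls (the topic name and bootstrap address are illustrative):

import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.Node;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.serialization.StringDeserializer;

public class LeaderLookup {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092"); // illustrative address
    props.put("key.deserializer", StringDeserializer.class.getName());
    props.put("value.deserializer", StringDeserializer.class.getName());
    try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
      // partitionsFor returns cluster metadata, including each partition's leader.
      List<PartitionInfo> partitions = consumer.partitionsFor("my-topic");
      for (PartitionInfo info : partitions) {
        Node leader = info.leader();
        // These values could, in principle, populate net.peer.name / net.peer.port.
        System.out.printf("partition %d -> leader %s:%d%n",
            info.partition(), leader.host(), leader.port());
      }
    }
  }
}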

Member Author:

@iNikem Saying information is available somewhere is meaningless if it can't be proven to be accessible. From an instrumentation perspective, it might be possible to retrieve that information, but if it can't be done by frameworks or user applications, for me that makes it unusable.

Separate to that is the meaning. If it's an ephemeral value that is subject to frequent change, what is the value in capturing that information?

Why is there a concern about messaging systems such as Apache Kafka not defining these values?

Contributor:

net.* attributes, which by semantic convention can be applied to all spans, allow for constructing an accurate network-level map of the monitored distributed system. Together with metrics, this can help the end user discover that specific nodes have a higher-than-usual error rate, and potentially even take remedial action, such as taking faulty nodes down.

"it's an ephemeral value that is subject to frequent change"

Frequency of change is very subjective. You may have a fixed cluster of Kafka brokers. In fact, Kafka being a "storage service", I would expect Kafka nodes to live at least days, if not weeks, but certainly not minutes or hours. To call them "ephemeral" is an exaggeration.

Member Author:

It might be a stretch, but it's not a complete exaggeration.

What diagnosis can happen in a situation where a network-level map contains net.* attributes for nodes that are no longer present in the system?

I can certainly see there would be situations where it's necessary to "spin up" another Kafka broker to handle a spike in messages and then disappear again. Isn't that what Kubernetes offers, the power and ease to scale up and then down again?

Granted, there will definitely be peers that are essentially "always there", or near enough, but I still think that information should be a "recommendation" and not a "requirement".

Contributor:

I am fine with recommendation.

I hope you don't tear down your Kafka nodes after incoming spike ends. At least wait for all messages to be processed :) Also, event-sourcing systems cannot allow for node disappearing. You have to persist all relevant data. All in all, short-lived Kafka nodes in production is a novel concept to me.

Oberon00 (Member), Oct 2, 2020:

"What diagnosis can happen in a situation where a network-level map contains net.* attributes for nodes that are no longer present in the system?"

You can cross-reference (Kafka or k8s or whatever) logs, where the Kafka nodes may log their IP when starting.


arminru (Member) commented Sep 30, 2020:

@kenfinnigan Please try to avoid force-pushing since this makes it harder to follow the revisions. We'll squash it anyway when merging so don't worry if you accumulate a couple of commits on your branch here 🙂

kenfinnigan (Member Author):

@arminru it was easier to collapse my commits to rebase, as the entire structure of the attribute table was changed yesterday.

Sorry if that made things confusing.

arminru (Member) commented Sep 30, 2020:

@kenfinnigan I see, yes, adding the YAML representation of the semantic convention, which is used for automatically generating the table, unfortunately interfered with your PR. Feel free to keep editing the Markdown manually and discussing things there if that's easier for you, and then adapt the respective YAML file (semantic_conventions/trace/messaging.yaml) afterwards in order to be consistent (and have a green build).

- Revert changes to existing examples but change them to use RabbitMQ instead of Kafka
- Add specific Kafka example with new attributes
Comment on lines 241 to 242
| Parent | | Span Prod1 | Span Prod1 | | Span Prod2 |
| Links | | | Span Rcv1 | Span Prod1 | |
Member:

Suggested change:
- | Parent | | Span Prod1 | Span Prod1 | | Span Prod2 |
- | Links | | | Span Rcv1 | Span Prod1 | |
+ | Parent | | Span Prod1 | Span Rcv1 | Span Proc1 | Span Prod2 |
+ | Links | | | Span Prod1 | | |

Proc1's parent would be Rcv1 in this case as it's Proc1's direct predecessor and Prod1 would be added as a link.
Same for Prod2 being a child of Proc1.

Member Author:

I can understand the reasoning for Proc1's parent being Rcv1, but Prod2 is not a child of Proc1.

The reasoning is that in Kafka the producing of a message is disconnected in time from when, or how often, that message is processed.

In this situation, Prod2 may be processed quickly, or it could be days/weeks/months before it's processed. From a span perspective, I don't think it should have a parent of Proc1 due to that timing disconnect.

Member:

From your timing diagram and description above, it looks like the second message (Prod2) is produced as part of processing the first message, i.e., in Proc1. Is this not the case? To me, Prod2 looks like a direct consequence of Proc1. Please note that parents don't necessarily have to enclose their children from a timing perspective; async relationships are also allowed.

Member Author:

Yes, it is performed as part of processing the first message. However, in reactive messaging and event-driven architectures there isn't always a "parent-child" connection; it's more like a correlation or link relationship. They may be "parent-child", but not always.

If it is added as a parent, and it hasn't had any further span, doesn't that leave the parent span/trace unclosed? Does that make sense for reporting?

Member:

Related #958 / CC @anuraaga wanted kinda the opposite there.

Member Author:

Thanks @Oberon00, I knew there was a better term but couldn't think of it. Reading the issue made me realize it was "follows-from".

So for reactive messaging and event-driven architectures, producing a new message while processing an existing message is more a "follows-from" relationship than parent-child, which is why I had used Links.
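A minimal sketch of that link-based, follows-from modeling with the OpenTelemetry Java API (span names mirror the example table; without a configured SDK the tracer is a no-op):

import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.SpanKind;
import io.opentelemetry.api.trace.Tracer;

public class FollowsFromSketch {
  public static void main(String[] args) {
    Tracer tracer = GlobalOpenTelemetry.getTracer("example");

    // Proc1: processing of the first message.
    Span proc1 = tracer.spanBuilder("Proc1")
        .setSpanKind(SpanKind.CONSUMER)
        .startSpan();

    // Prod2: a new message produced while Proc1 runs. Modeled as
    // follows-from: a link back to Proc1 instead of a parent-child tie.
    Span prod2 = tracer.spanBuilder("Prod2")
        .setSpanKind(SpanKind.PRODUCER)
        .setNoParent()
        .addLink(proc1.getSpanContext())
        .startSpan();

    prod2.end();
    proc1.end();
  }
}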

kenfinnigan (Member Author):

@arminru @Oberon00 I think I've resolved any problems or concerns, please let me know if I missed something.

Can someone also let me know what the error is on the markdown job? Thanks.

arminru (Member) left a comment:

@kenfinnigan

Regarding the build: It verifies that the entire text between <!-- semconv messaging --> and <!-- endsemconv --> in the markdown is identical to what the markdown generator would generate (since we wanted to have both the YAML definition as well as the rendered MD stored in the repo for better accessibility). If you remove your change in line 128/132, it will be identical and the build will pass. If you think specifying the host is not possible in some cases (as you laid out in #1027 (comment)), you can edit the YAML to make them not required but recommended.

This can be done by adding

- ref: net.peer.name
  required:
    conditional: If available.
- ref: net.peer.ip
  tag: connection-level
  required:
    conditional: If available.

instead of

constraints:
  - any_of:
    - 'net.peer.name'
    - 'net.peer.ip'

Please also add the Kafka attributes you introduced at the end of messaging.yaml and have the table generated by adding <!-- semconv messaging.kafka --> and <!-- endsemconv --> markers in the Kafka section in the Markdown. By updating the YAML, you'll also make your attributes available to code generators creating constants for use by (auto) instrumentations.

    - id: messaging.kafka
      prefix: messaging.kafka
      extends: messaging
      brief: >
        Attributes for Apache Kafka
      attributes:
        - id: message_key
          type: string
          brief: >
            Message keys in Kafka are used for ...
            They differ from `messaging.message_id` in that they're not unique.
            If the key is `null`, the attribute MUST NOT be set.
          note: >
            If the key type is not string, its string representation has to be supplied for the attribute.
          examples: 'myKey'
...

null values for span attributes are undefined behavior, hence the "If the key is `null`, the attribute MUST NOT be set." requirement.

| `messaging.kafka.client_id` | Client Id for the Consumer or Producer that is handling the message. |
| `messaging.kafka.partition` | Partition the message is sent to. |

For Apache Kafka producers, `peer.service` should be set to the name of the broker or external service the message will be sent to.
Member:

Suggested change:
- For Apache Kafka producers, `peer.service` should be set to the name of the broker or external service the message will be sent to.
+ For Apache Kafka producers, [`peer.service`](./span-general.md#general-remote-service-attributes) SHOULD be set to the name of the broker or external service the message will be sent to.

This could also be achieved by using - ref with a note in the Kafka section of the YAML file.

- Minor update to wording of Apache Kafka service and peer names for Producers and Consumers
kenfinnigan (Member Author):

Thanks @arminru for the detailed explanation of what was needed.

Hopefully I've got it right and the build passes.

kenfinnigan (Member Author):

Thanks @Oberon00

arminru requested review from a team on Oct 12, 2020
kenfinnigan (Member Author):

@anuraaga do you have a few minutes to review/approve this? It would be good to get it included. Thanks.

anuraaga (Contributor) left a comment:

So sorry for the slow review on this - been procrastinating context switching back to messaging, a hard problem which I appreciate the help with :)

Left one comment since it'd be great to make sure we get the spec and instrumentation in sync.

@@ -176,6 +189,25 @@ In RabbitMQ, the destination is defined by an _exchange_ and a _routing key_.
`messaging.destination` MUST be set to the name of the exchange. This will be an empty string if the default exchange is used.
The routing key MUST be provided to the attribute `messaging.rabbitmq.routing_key`, unless it is empty.

#### Apache Kafka
Contributor:

We have a tombstone filled in the instrumentation currently, is it useful to have here?

https://github.com/open-telemetry/opentelemetry-java-instrumentation/blob/f23ad29187ecea4e742e1dd6a5baeb84020298d4/instrumentation/kafka-clients-0.11/src/main/java/io/opentelemetry/javaagent/instrumentation/kafkaclients/KafkaProducerTracer.java#L52

If not, it's OK and we can remove it from the instrumentation. But this seems like the right time to get our instrumentation synced up with the spec.
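For illustration, a sketch of what such producer-side instrumentation does; the attribute key used here is an assumption for the example, since the thread is still settling on the canonical name:

import io.opentelemetry.api.trace.Span;
import org.apache.kafka.clients.producer.ProducerRecord;

final class KafkaTombstoneSketch {
  // Assumed attribute key, for illustration only.
  private static final String TOMBSTONE = "messaging.kafka.tombstone";

  static void markIfTombstone(Span span, ProducerRecord<?, ?> record) {
    // A Kafka record with a null value is a tombstone: it marks the key
    // for deletion during log compaction instead of carrying a payload.
    if (record.value() == null) {
      span.setAttribute(TOMBSTONE, true);
    }
  }
}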

Contributor:

I might also consider separating out a subfolder for messaging implementations as per #968 - there are so many of them that I worry a single doc just gets unwieldy.

Member Author:

No problems with the delay; I appreciate everyone is super busy right now and context switching IS difficult.

I can see it being beneficial to identify if a particular trace is related to a tombstone record. I can add it to the Kafka section.

Can the splitting be done in a subsequent PR, or you'd prefer to include it here as well? I can certainly follow up with such a change.

Contributor:

Splitting in a separate PR sounds great, thanks

kenfinnigan (Member Author):

I've added "tombstone" for Kafka as instrumentation already utilizes it.

kenfinnigan (Member Author):

@anuraaga @arminru @Oberon00 is one of you able to restart the CI? It looks like the "EasyCLA" check hung.

Thanks

Oberon00 closed this on Oct 19, 2020
Oberon00 (Member):

Closed & reopened to trigger EasyCLA

Oberon00 reopened this on Oct 19, 2020
anuraaga (Contributor) left a comment:

Thanks!

kenfinnigan (Member Author):

Is this good to be merged or does it require additional approvers?

arminru requested a review from iNikem on Oct 20, 2020
arminru (Member) commented Oct 20, 2020:

@iNikem had quite some comments/opinions here, so if he deems these solved or otherwise properly addressed, we should be good to merge it. This PR has been open long enough for others to take a look already.

iNikem (Contributor) commented Oct 21, 2020:

@arminru good to go from me

arminru merged commit 662baae into open-telemetry:master on Oct 21, 2020
arminru (Member) commented Oct 21, 2020:

Thanks again for your contribution, @kenfinnigan!

kenfinnigan (Member Author):

No problem, thanks for the help in getting it merged

kenfinnigan deleted the trace-semantic-messaging branch on Oct 21, 2020
tigrannajaryan pushed a commit that referenced this pull request Oct 22, 2020
carlosalberto pushed commits to carlosalberto/opentelemetry-specification that referenced this pull request on Oct 31, 2024
Labels: area:semantic-conventions (Related to semantic conventions), spec:trace (Related to the specification/trace directory)

Successfully merging this pull request may close: Trace semantic conventions for (non-traditional) messaging (#977)