fix(otlp-transformer): avoid precision loss when converting HrTime to unix nanoseconds #4062
Conversation
Codecov Report
@@ Coverage Diff @@
## main #4062 +/- ##
==========================================
- Coverage 92.28% 92.24% -0.04%
==========================================
Files 329 331 +2
Lines 9370 9453 +83
Branches 1991 1992 +1
==========================================
+ Hits 8647 8720 +73
- Misses 723 733 +10
I think this may be a breaking change for the exporter interface? Maybe it's time to start seriously considering a 2.0 release of the SDK since we have quite a few improvements that would be good to make.
Since the breaking change is in otlp-transformer, which is both experimental and used internally, I'd keep things as is with regards to this change.
I agree, this change should only be a problem for direct users of @opentelemetry/otlp-transformer, and it's documented that this package is only intended for internal use. The exporters are pinned to the exact version, so it's fair to assume that this won't break actual end-users. 🙂
Turns out the number of tests that needed to be changed was greater than expected; still going over some of them and will try to cut some duplicate code.
Yes, this has been a pain point for a while. They have grown historically and really need some refactoring. 😞
Additionally:
* Removed a bit of dead code in the tests.
* Fixed [opentelemetry-exporter-metrics-otlp-proto/test/OTLPMetricExporter.test.ts](https://github.com/open-telemetry/opentelemetry-js/pull/4062/files#diff-712a67b7aa2e82726dce05de9454b18060ed032c83e82a6e9febaa44cba7b1a4) and [opentelemetry-exporter-metrics-otlp-grpc/test/OTLPMetricExporter.test.ts](https://github.com/open-telemetry/opentelemetry-js/pull/4062/files#diff-027af27e5e7182406fd3a331263372057498022868949b788f2349fd77329ef5) to actually test against the ground truth. Previously they compared the parsed timestamps against the parsed timestamps: the data was decoded and then cast to a mismatching type. In the case of timestamps, the actual parsed data had string timestamps, but this was never reflected in the OTLP types.
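To illustrate that pitfall, here is a hypothetical sketch (made-up names and payload, not the actual test code): casting the decoded payload to the OTLP types hides the fact that the wire format really carries string timestamps, so comparing parsed output against parsed output can never fail.

```ts
// Hypothetical sketch, not taken from the PR's test code.
interface ExportedSpan {
  startTimeUnixNano: number; // what the (outdated) OTLP type claims
}

// What the exporter actually puts on the wire: a string timestamp.
const wirePayload = '{"startTimeUnixNano":"1686536436123456789"}';

// The cast silences the mismatch instead of surfacing it.
const decoded = JSON.parse(wirePayload) as ExportedSpan;

// Comparing decoded data against decoded data always passes; asserting
// against a hand-written ground-truth object would have caught the change.
console.log(typeof decoded.startTimeUnixNano); // "string", not "number"
```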
Thanks! 🚀 I've added one blocking comment about the code adapted from long.js - then I think this PR is good to go.
Waiting for approval by @dyladan before merging though to ensure all concerns are addressed 🙂
Co-authored-by: Marc Pichler <marc.pichler@dynatrace.com>
Hey @seemk, just a heads up: this was a breaking change for us when ingesting OpenTelemetry spans. It caused a little bit of confusion, but we did figure out what was going on eventually 😅 We have our own custom receiver written in TypeScript. Are we doing something wrong/unsafe, and is there an official package for deserializing spans?
What does the custom receiver do? Do you have your own protobuf decoder?
The custom receiver processes OpenTelemetry data and sends it to Kinesis. The majority of this data is just JSON, although we do support protobuf. I only saw an issue parsing the JSON payloads; I don't believe protobuf was affected.
@seemk we faced a similar issue: timestamps started coming up as an object.
@Ankcorn can confirm protobuf wasn't affected.
BTW, there is a detailed report of the breakage in this issue: #4202
Hi guys, I'm not sure if this change was meant to have external visibility, but the timestamps being sent by a regular web application are now also being sent as JSON objects, breaking certain providers ingesting this data (e.g. Uptrace).
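For custom receivers hit by this, here is a minimal defensive sketch (my own illustration, not an official package; the exact wire shapes of timeUnixNano are assumptions based on the reports above) that normalizes a timestamp whether it arrives as a number, a decimal string, or a Long-like { low, high } object:

```ts
// Illustrative only; the accepted input shapes are assumptions based on the thread above.
type LongLike = { low: number; high: number; unsigned?: boolean };
type WireTimestamp = number | string | LongLike;

// Normalize a timeUnixNano field to a bigint regardless of how it was encoded.
function toUnixNanos(value: WireTimestamp): bigint {
  if (typeof value === 'number') return BigInt(Math.round(value));
  if (typeof value === 'string') return BigInt(value);
  // Long-like object: reassemble the 64-bit value from its 32-bit halves,
  // forcing both halves to unsigned with >>> 0.
  return (BigInt(value.high >>> 0) << 32n) | BigInt(value.low >>> 0);
}

// All shapes map to the same nanosecond value.
const nanos = 1686536436123456789n;
const asLong: LongLike = { low: Number(nanos & 0xffffffffn), high: Number(nanos >> 32n) };
console.log(toUnixNanos('1686536436123456789') === nanos); // true
console.log(toUnixNanos(asLong) === nanos);                // true
```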
Hi guys! I can confirm that this change also broke sending browser events to Honeycomb.io. Their API
Which problem is this PR solving?
Fixes #2643
Short description of the changes
Fixes the precision loss when converting HrTime to a singular number. The precision loss happened due to hrTimeToNanoseconds first multiplying seconds by nanosecond digits, causing the initial number to overflow 53 bits, after which it adds nanoseconds to the number, causing additional errors. Instead of using the long.js package, I copied over the relevant parts from this library. long.js does not support CommonJS, only ESM, and it seems enabling esModuleInterop is not recommended / accepted.
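A minimal sketch of the failure mode (my own example, not the PR's actual code), assuming an HrTime tuple of [seconds, nanoseconds]:

```ts
const hrTime: [number, number] = [1686536436, 123456789];

// Naive conversion: seconds * 1e9 already exceeds Number.MAX_SAFE_INTEGER
// (2^53 - 1, roughly 9.007e15), so the low-order nanosecond digits are lost.
const lossy = hrTime[0] * 1e9 + hrTime[1];
console.log(lossy); // 1686536436123456800, not ...789

// BigInt arithmetic keeps full precision; the PR instead adapts the relevant
// parts of long.js, which represents 64-bit integers as two 32-bit halves.
const exact = BigInt(hrTime[0]) * 1_000_000_000n + BigInt(hrTime[1]);
console.log(exact.toString()); // "1686536436123456789"
```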
Type of change
Checklist: