
Add DistributedContext implementation and Delete ITag* Types #366

Merged: 5 commits merged into open-telemetry:master from DistributedContextImpl on Dec 6, 2019
Conversation

@tarekgh (Contributor) commented Dec 2, 2019

This change includes the following:

- Add the missing implementation of DistributedContext.
- Change DistributedContextEntry to a struct to avoid allocations inside DistributedContext.
- Introduce a DistributedContext carrier that carries the current context during execution (see the sketch after this list):
    - By default, DistributedContext uses a no-op carrier, an optimization for when the SDK is not loaded.
    - When the SDK is loaded, it sets the carrier to the async-local carrier defined in the SDK.
- Remove all Tag* and ITag* types, which are no longer needed because DistributedContext covers the required scenarios.
- Modify the serialization to work with DistributedContext.
- Clean up the tests and add more DistributedContext-specific tests.
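A minimal sketch of the carrier pattern described above, using simplified, assumed type and member names rather than the exact API in this PR:

```csharp
using System.Threading;

// Simplified stand-in for the real context type (entries elided).
public readonly struct DistributedContext
{
}

// Assumed carrier abstraction: something that holds "the current context".
public interface IDistributedContextCarrier
{
    DistributedContext Current { get; }

    DistributedContext SetCurrent(DistributedContext context);
}

// Default carrier when no SDK is loaded: stores nothing, so reading or
// writing the current context costs nothing and never allocates.
public sealed class NoopDistributedContextCarrier : IDistributedContextCarrier
{
    public static readonly NoopDistributedContextCarrier Instance = new NoopDistributedContextCarrier();

    private NoopDistributedContextCarrier()
    {
    }

    public DistributedContext Current => default;

    public DistributedContext SetCurrent(DistributedContext context) => default;
}

// Carrier an SDK would install so the current context flows across
// async calls via the logical execution context.
public sealed class AsyncLocalDistributedContextCarrier : IDistributedContextCarrier
{
    private static readonly AsyncLocal<DistributedContext> CarriedContext = new AsyncLocal<DistributedContext>();

    public DistributedContext Current => CarriedContext.Value;

    public DistributedContext SetCurrent(DistributedContext context)
    {
        CarriedContext.Value = context;
        return context;
    }
}
```

The idea is that a static carrier slot on DistributedContext (not shown here) points at the no-op instance until an SDK swaps in the async-local carrier.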
@tarekgh (Contributor, Author) commented Dec 2, 2019

@SergeyKanzhelev I'll try to follow up on the comments on this PR. I just want to say there are opportunities to optimize further: we could provide more constructor overloads on DistributedContext so the caller can avoid creating any collection (a sketch follows below). We may look at that later.
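For illustration, such overloads could look like the following; the names and shapes are assumptions, not the PR's exact API:

```csharp
// Hypothetical shapes; the real API surface may differ.
public readonly struct DistributedContextEntry
{
    public DistributedContextEntry(string key, string value)
    {
        this.Key = key;
        this.Value = value;
    }

    public string Key { get; }

    public string Value { get; }
}

public readonly struct DistributedContext
{
    private readonly DistributedContextEntry[] entries;

    // Fast path: a caller with a single entry never builds a collection;
    // the context allocates one small internal array itself.
    public DistributedContext(string key, string value)
        : this(new[] { new DistributedContextEntry(key, value) })
    {
    }

    public DistributedContext(DistributedContextEntry[] entries)
    {
        this.entries = entries;
    }
}
```

A caller with a single entry could then write `var context = new DistributedContext("region", "west-us");` without allocating a list.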

@tarekgh (Contributor, Author) commented Dec 2, 2019

I'll look at the failing tests

@tarekgh (Contributor, Author) commented Dec 2, 2019

@lmolkova @SergeyKanzhelev do you think the failing test is related to my changes here?

https://dev.azure.com/opentelemetry/pipelines/_build/results?buildId=915

[xUnit.net 00:00:03.54]     OpenTelemetry.Trace.Export.Test.BatchingSpanProcessorTest.ProcessorDoesNotBlockOnExporter [FAIL]
  X OpenTelemetry.Trace.Export.Test.BatchingSpanProcessorTest.ProcessorDoesNotBlockOnExporter [1s 17ms]
  Error Message:
   Expected at least 1, got 0
Expected: True
Actual:   False
  Stack Trace:
     at OpenTelemetry.Trace.Export.Test.BatchingSpanProcessorTest.WaitForSpans(TestExporter exporter, Int32 spanCount, TimeSpan timeout) in D:\a\1\s\test\OpenTelemetry.Tests\Impl\Trace\Export\BatchingSpanProcessorTests.cs:line 341
   at OpenTelemetry.Trace.Export.Test.BatchingSpanProcessorTest.ProcessorDoesNotBlockOnExporter() in D:\a\1\s\test\OpenTelemetry.Tests\Impl\Trace\Export\BatchingSpanProcessorTests.cs:line 243
Results File: D:\a\_temp\VssAdministrator_fv-az170_2019-12-02_19_14_55.trx
Test Run Failed.

@lmolkova commented Dec 2, 2019

@tarekgh I don't think the test failures are related to your change. I guess it's still this issue: #344. Do you have permission to re-run the build?

@tarekgh (Contributor, Author) commented Dec 2, 2019

> Do you have permission to re-run the build?

I don't :-(

@lmolkova commented Dec 2, 2019

I've restarted the pipeline. You could also close and reopen the PR; that re-triggers the pipelines.

@SergeyKanzhelev (Member) left a review comment:

mostly questions

@tarekgh (Contributor, Author) commented Dec 4, 2019

@SergeyKanzhelev I have addressed some of your feedback.

Regarding filtering the duplicate entries during DistributedContext construction, I have kept the filtering but optimized it further. The reason is that not filtering would add complexity to other functionality such as equality, and we would also have to filter every time the context gets serialized, which is not something we want to repeat for the same context on every serialization. Let's keep the filtering for now; we can optimize further later if needed. A sketch of what construction-time filtering can look like follows below.
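One way such filtering could be done, as an illustrative sketch rather than the PR's actual code (the helper name is hypothetical, and it reuses the DistributedContextEntry shape from the earlier sketch):

```csharp
using System.Collections.Generic;

// Hypothetical helper: deduplication happens once, at construction time.
internal static class DistributedContextEntryFilter
{
    // Last value for a key wins; afterwards keys are unique, so equality
    // checks and serialization never have to re-filter the same context.
    public static DistributedContextEntry[] RemoveDuplicates(IEnumerable<DistributedContextEntry> entries)
    {
        var byKey = new Dictionary<string, DistributedContextEntry>();
        foreach (DistributedContextEntry entry in entries)
        {
            byKey[entry.Key] = entry; // a later duplicate overwrites an earlier one
        }

        var result = new DistributedContextEntry[byKey.Count];
        byKey.Values.CopyTo(result, 0);
        return result;
    }
}
```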
I'll wait for your answers to the other open comments you raised.

Thanks for the feedback.

@SergeyKanzhelev (Member) commented:

@tarekgh I appreciate you working on your vacation. I'm inclined to accept this PR and follow up on the changes later. @open-telemetry/dotnet-approvers, any objections?

@MikeGoldsmith (Member) commented:

I think this is good to accept.

@SergeyKanzhelev (Member) left a review comment:

I'll create issues on unresolved comments

@SergeyKanzhelev merged commit 8e8a723 into open-telemetry:master on Dec 6, 2019.
@tarekgh deleted the DistributedContextImpl branch on Aug 31, 2022.
Yun-Ting pushed a commit to Yun-Ting/opentelemetry-dotnet that referenced this pull request on Oct 13, 2022.