Add Splunk HEC trace exporter #182
Conversation
Force-pushed from 351556c to 14e73b6
Codecov Report
| Coverage Diff | main | #182 | +/- |
|:---|---:|---:|---:|
| Coverage | 78.20% | 78.48% | +0.27% |
| Files | 4 | 4 | |
| Lines | 78 | 79 | +1 |
| Hits | 61 | 62 | +1 |
| Misses | 9 | 9 | |
| Partials | 8 | 8 | |
Continue to review full report at Codecov.
Force-pushed from 52e88b0 to ac9e94d
@mxiamxia @wyTrivail can you please review this PR?
lgtm, thanks!
Force-pushed from ac9e94d to c92c34f
@wyTrivail thank you for reviewing. Is there anything else needed to merge this?
**Description:**
- Add Splunk HEC trace exporter to the AWS OpenTelemetry distro
- Add end-to-end tests following the wiki description
- Add the exporter to the README list

**Link to tracking Issue:** n/a

**Testing:**
- Unit tests in the collector-contrib repository: https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/master/exporter/splunkhecexporter
- E2E tests: aws-observability/aws-otel-test-framework#140

**Documentation:**
- Added the exporter to the README list
- Usage documentation: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/master/exporter/splunkhecexporter/README.md
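For context, here is a minimal sketch of what enabling this exporter in a collector configuration could look like. The token and endpoint values are placeholders, and only the basic settings are shown; the splunkhecexporter README linked above documents the full set of options.

```yaml
# Minimal sketch: enable the Splunk HEC exporter in a traces pipeline.
# The token and endpoint below are placeholders, not real values.
receivers:
  otlp:
    protocols:
      grpc:

exporters:
  splunk_hec:
    # HEC token issued by the Splunk deployment (placeholder shown here)
    token: "00000000-0000-0000-0000-000000000000"
    # HTTP Event Collector endpoint of the Splunk instance (placeholder)
    endpoint: "https://splunk.example.com:8088/services/collector"

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [splunk_hec]
```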
Force-pushed from c92c34f to 59aa0c6
@tigrannajaryan thx for reminding! Merging it. BTW, did you hear anything from your PM/team about whether a performance model against a real Splunk endpoint needs to be provided before the AWS OTel Collector releases it?
Thank you for merging @wyTrivail
No, I am afraid I haven't heard anything. I was under the impression that the performance tests that run as part of this repository are all we needed. Do you know who can provide this additional information about the performance tests?
thx for letting me know @tigrannajaryan. I'm not aware of the POC for that, but I will send an email internally to our PM to make sure the information reaches every partner. The performance test run in the repo uses the mocked_server instead of the real endpoint; after some internal discussion we would like partners to do a round of performance testing against the real endpoint, since that's what we can't do in the repo. I will double-check that the information is communicated correctly.
@wyTrivail ok, thanks for checking. I look forward to hearing more about this. Note that the performance tests in the official OpenTelemetry Collector repository use real protocol implementations, so they are more realistic. Could you perhaps base your performance test requirements on those?
Thx for the suggestion! I will look into it. When we say "real protocol implementation", I reckon you mean the mocked server validates the data structure of the metrics/traces? Right now we are still exploring the proper way to build a realistic mocked_server that could give us "real" performance data. From the exporter's point of view, I'd prefer to simulate real latency, throttling, and other factors that could impact the collector's performance. BTW, this reminded me of one more thing you might want to take a look at: the sapm exporter was failing the negative soaking test, which routes the endpoint to a "bad/invalid" endpoint, and the memory of the collector grew to around 4GB. Below is the error message:
It would be great if you could take a look; to me it looks like this is an issue with backfilling.
Yes. The Collector testbed uses mocked servers, which nevertheless fully implement the receiving side of the corresponding protocol and return the expected responses to the Collector. From the perspective of testing the performance and functionality of the Collector, the testbed is very close to sending data to an actual backend.
Do you use the memory limiter processor in the soak test's Collector config? If not, then memory can grow up to the configured sending queue size (default is 5000 requests). Queue limits are based on the number of requests, so memory usage in bytes is hard to estimate without knowing the data composition of each request. 4GB may be expected if there is no memory limiter and the requests are large.
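As a rough illustration of the settings being discussed, a memory_limiter processor combined with a bounded sending queue might be configured along these lines. The limits below are placeholders rather than recommendations, and the sketch assumes the exporter exposes the standard exporterhelper sending_queue settings.

```yaml
# Illustrative sketch only: the limits and queue size are placeholders.
processors:
  memory_limiter:
    check_interval: 1s     # how often memory usage is checked
    limit_mib: 1500        # hard limit on collector memory
    spike_limit_mib: 512   # headroom for short-lived spikes

exporters:
  splunk_hec:
    token: "00000000-0000-0000-0000-000000000000"   # placeholder
    endpoint: "https://splunk.example.com:8088/services/collector"
    # Assumes this exporter uses the standard exporterhelper queue settings.
    sending_queue:
      enabled: true
      queue_size: 5000     # default; memory use scales with request size
```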
@tigrannajaryan thank you for the suggestions, and sorry for my late response! For the performance test, I added a delay (15ms) to the mock server's response; the CPU and memory usage of the collector became higher than before, and the result is almost the same as the real-endpoint performance test we've done. For the memory limiter: we didn't use the memory limiter before. I tried it yesterday; the memory became stable and the collector started to drop data. I think that is a more realistic use case, so I will add the memory limiter to our testing. Thanks again for this suggestion; I will also update our example to guide customers to use it in production.
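For the production example mentioned above, the common guidance is to place memory_limiter first in the processor chain, so data is dropped before any further processing when memory is tight. A sketch follows; the batch processor here is only illustrative.

```yaml
# Sketch: memory_limiter placed ahead of other processors in the pipeline.
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [memory_limiter, batch]   # memory_limiter runs first
      exporters: [splunk_hec]
```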
@wyTrivail sounds good. BTW, I am working on improving the memory limiter to use less CPU. If you notice high CPU usage you may need to update the dependency once open-telemetry/opentelemetry-collector#2250 is merged.
@tigrannajaryan thx, I will check it and let you know if I see any anomaly.