Merge from main #1
Merged
**Documentation:** Found and fixed a typo while reading the transform processor README.
…#27272) **Description:** Added support for Exporter Helper configuration. **Link to tracking Issue:** #24329 **Testing:** Added tests and manually tested with e2e scenarios. --------- Co-authored-by: Ramachandran A G <ramacg@microsoft.com> Co-authored-by: Ziqi Zhao <zhaoziqi9146@gmail.com> Co-authored-by: Ramachandran A G <106139410+ag-ramachandran@users.noreply.github.com>
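For context, a minimal sketch of what the standard exporter-helper settings look like in a collector config; the `azuredataexplorer` exporter name and the specific values here are illustrative assumptions rather than values taken from the PR:

```yaml
exporters:
  azuredataexplorer:        # exporter name is an assumption for illustration
    timeout: 10s            # exporter-helper request timeout
    retry_on_failure:       # exporter-helper retry settings
      enabled: true
      initial_interval: 5s
      max_interval: 30s
      max_elapsed_time: 300s
    sending_queue:          # exporter-helper queue settings
      enabled: true
      num_consumers: 10
      queue_size: 1000
```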
#27485) **Description:** Add the `k8s.pod.qos_class` optional resource attribute. **Link to tracking Issue:** #27483 **Testing:** Updated unit tests. **Documentation:** Generated.
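As a sketch of how an optional resource attribute like this is usually switched on (optional attributes are disabled by default); the `k8s_cluster` component name here is an assumption, so check the component's generated documentation for the exact location:

```yaml
receivers:
  k8s_cluster:                  # component name assumed for illustration
    resource_attributes:
      k8s.pod.qos_class:
        enabled: true           # optional resource attributes are off by default
```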
**Description:** This component will export markers to be consumed by the Honeycomb Markers API to highlight user events based initially on preset configurations. **Link to tracking Issue:** #26653 **Testing:** Unit testing for factory and config **Documentation:** README describing component usage --------- Co-authored-by: Tyler Helmuth <12352919+TylerHelmuth@users.noreply.github.com>
**Description:** It's not obvious from the readme that you won't get all metrics listed in metadata by default. Suggest small doc update to make it clearer. **Link to tracking Issue:** **Testing:** Observed with running collector. **Documentation:** See above. --------- Co-authored-by: Daniel Jaglowski <jaglows3@gmail.com> Co-authored-by: Curtis Robert <92119472+crobert-1@users.noreply.github.com>
2nd step of the deprecation of `container.cpu.percent`. According to the deprecation plan in the [docs](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/v0.79.0/receiver/dockerstatsreceiver#transition-to-cpu-utilization-metric-name-aligned-with-opentelemetry-specification), this PR disables the old metric by default, to be released in v0.83.0. Tracking issue: #21807 --------- Co-authored-by: Christian <calvarez@newrelic.com>
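Users who still need the deprecated metric during the transition can re-enable it explicitly; a hedged sketch of the usual metric toggle for the Docker stats receiver:

```yaml
receivers:
  docker_stats:
    metrics:
      container.cpu.percent:   # deprecated metric, now disabled by default per this change
        enabled: true
```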
…27646) **Description:** Fix mask when multiple patterns exist.

With the following input:
```
attr: <secret1> <secret2>
```
and config:
```
redaction:
  blocked_values:
    - '<secret1>'
    - '<secret2>'
```
Output before fix:
```
attr: <secret1> ****
```
Output after fix:
```
attr: **** ****
```
GHSA-qppj-fm5r-hxr3 https://github.com/grpc/grpc-go/releases/tag/v1.58.3 --------- Signed-off-by: Pavol Loffay <p.loffay@gmail.com>
Resolves #27640 --------- Co-authored-by: Paulo Janotti <pjanotti@splunk.com>
…27459) Description: Exposes bbolt fsync as a configuration option Link to tracking Issue: [20266](#20266) Testing: Manual Testing, Updated unit tests for factory and client Documentation: Added change-log and documentation comments in config.go --------- Co-authored-by: Daniel Jaglowski <jaglows3@gmail.com>
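A hedged sketch of what enabling the new option could look like; the `fsync` key name and the directory value are assumptions based on the PR description, so verify them against the file storage extension README:

```yaml
extensions:
  file_storage:
    directory: /var/lib/otelcol/file_storage   # illustrative path
    fsync: true    # assumed key name; trades write throughput for durability
```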
**Description:** Cache the publisher event to: 1. Avoid logging the same error message every time one event from the given source is logged. 2. Avoid opening and closing the event publisher for every single event. **Link to tracking Issue:** [Item 4 described on the investigation](#21491 (comment)) for issue #21491. **Testing:** * Go tests for `pkg/stanza` and `receiver/windowseventlogreceiver` on Windows box. * Ran the contrib build locally to validate the change. * Can't run the full make locally: misspell is failing on Windows because the command line is too long. **Documentation:** Let me know if changing the severity of the log message requires a changelog update.
**Description:** Adding myself as owner of `windowseventlogreceiver` per invite #27658 (comment) cc @djaglowski
…tings.enabled flag (#27592) **Description:** Previously the remote write exporter would incorrectly retry if `retrySettings.enabled` was set to false. **Testing:** Unit tests
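For reference, a sketch of the configuration this fix concerns; the endpoint value is illustrative:

```yaml
exporters:
  prometheusremotewrite:
    endpoint: https://prometheus.example.com/api/v1/write   # illustrative endpoint
    retry_on_failure:
      enabled: false   # with this fix, no retries are attempted when disabled
```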
**Description:** The `system` detector extracts all the `cpu` info from the system even if you disable the configs and I believe this is where the bug kicks in. Disabling the settings will only stop it from setting the resource attributes. The [library](https://github.com/shirou/gopsutil/blob/v3.23.9/cpu/cpu_windows.go#L113) that we rely on doesn't extract some attributes for Windows OS (in this case, the field `cpu.Model`) and it leaves this field empty. This results in a bug when we try to parse an empty string. The long-term fix will be to extract `cpu.Model` in the `gopsutil` upstream library. **Link to tracking Issue:** #27675
Related issue: #20552

Tweak the mock-backend to do the following:
- Receives data from the receiver.
- Returns errors randomly to our receiver, which attempts to resend/drop the data.

This is helpful when we're required to test random behaviors of the collector and ensure reliable data delivery. This is my initial PR to expand the testbed and will help my further efforts to expand it. @omrozowicz-splunk and I plan on adding `sending_queue` support to the testbed and expanding the testing capabilities. --------- Co-authored-by: Daniel Jaglowski <jaglows3@gmail.com>
**Description:** Adding integration tests for the syslog exporter (and syslog receiver) and fixing bugs which have been found during the process. **Link to tracking Issue:** #21245 **Testing:** Integration tests and more unit tests **Documentation:** N/A --------- Signed-off-by: Dominik Rosiek <drosiek@sumologic.com> Co-authored-by: Daniel Jaglowski <jaglows3@gmail.com>
**Description:** Logs and traces are not supported telemetry types for the file receiver, and the collector will fail to start if the receiver is included in a pipeline of either of these types. This change corrects the README to properly reflect this. **Link to tracking Issue:** Resolves #27682
Running `make gengithub` or `make update-codeowners` with an incorrect token, or without setting the token, throws the same error: ```https://api.github.com/orgs/open-telemetry/members?per_page=50: 401 Bad credentials []``` It makes sense to throw an error if the user has forgotten to set the GITHUB_TOKEN variable; this also distinguishes between the two cases (incorrect token vs. token not set).
This has been causing ambiguous imports all over the place for some time. Signed-off-by: Alex Boten <aboten@lightstep.com>
Refactored parts of the Splunk Enterprise receiver to better leverage the pre-existing otel SDK. This PR also updates the README to be a more informative document. [27026](#27026) Unit testing is included and updated to accommodate the new refactor.
**Description:** The current SignalFx exporter maps to a static user agent string "OpenTelemetry-Collector SignalFx Exporter/v0.0.1". This PR changes the version to match the build info version. **Link to tracking Issue:** Fixes #16841
Fixes some incorrect wording in the contributing documentation
Reuse the byte buffer used when encoding metrics to HEC events JSON.
- Bump github.com/DataDog/datadog-api-client-go/v2 from 2.17.0 to 2.18.0 in /exporter/datadogexporter
- Bump github.com/GoogleCloudPlatform/opentelemetry-operations-go/detectors/gcp from 1.19.1 to 1.20.0 in /exporter/datadogexporter
- Bump github.com/aliyun/aliyun-log-go-sdk from 0.1.60 to 0.1.63 in /exporter/alibabacloudlogserviceexporter
- Bump github.com/aws/aws-sdk-go from 1.45.24 to 1.45.25 in /exporter/datadogexporter
- Bump github.com/aws/aws-sdk-go from 1.45.24 to 1.45.25 in /internal/aws/awsutil
- Bump github.com/aws/aws-sdk-go from 1.45.24 to 1.45.25 in /internal/aws/xray
- Bump github.com/aws/aws-sdk-go from 1.45.24 to 1.45.25 in /processor/resourcedetectionprocessor
- Bump github.com/google/go-cmp from 0.5.9 to 0.6.0 in /pkg/translator/opencensus
- Bump github.com/google/go-cmp from 0.5.9 to 0.6.0 in /processor/resourcedetectionprocessor
- Bump github.com/google/go-cmp from 0.5.9 to 0.6.0 in /receiver/activedirectorydsreceiver
- Bump github.com/google/go-cmp from 0.5.9 to 0.6.0 in /receiver/elasticsearchreceiver
- Bump github.com/google/go-cmp from 0.5.9 to 0.6.0 in /receiver/sqlserverreceiver
- Bump github.com/google/go-cmp from 0.5.9 to 0.6.0 in /receiver/vcenterreceiver
- Bump github.com/klauspost/compress from 1.17.0 to 1.17.1 in /exporter/fileexporter
- Bump github.com/prometheus/prometheus from 0.47.1 to 0.47.2 in /exporter/prometheusexporter
- Bump github.com/prometheus/prometheus from 0.47.1 to 0.47.2 in /pkg/translator/prometheusremotewrite
- Bump github.com/prometheus/prometheus from 0.47.1 to 0.47.2 in /testbed
- Bump google.golang.org/api from 0.146.0 to 0.147.0 in /receiver/googlecloudpubsubreceiver
- Bump github.com/Azure/azure-sdk-for-go/sdk/storage/azblob from 1.1.0 to 1.2.0 in /receiver/azureblobreceiver
- Bump github.com/ClickHouse/clickhouse-go/v2 from 2.14.2 to 2.14.3 in /exporter/clickhouseexporter
- Bump github.com/GoogleCloudPlatform/opentelemetry-operations-go/detectors/gcp from 1.19.1 to 1.20.0 in /exporter/datadogexporter
- Bump github.com/aws/aws-sdk-go from 1.45.24 to 1.45.26 in /exporter/awscloudwatchlogsexporter
- Bump github.com/aws/aws-sdk-go from 1.45.24 to 1.45.26 in /internal/aws/cwlogs
- Bump github.com/aws/aws-sdk-go from 1.45.24 to 1.45.26 in /internal/aws/k8s
- Bump github.com/aws/aws-sdk-go from 1.45.24 to 1.45.26 in /receiver/awsxrayreceiver
- Bump github.com/aws/aws-sdk-go from 1.45.25 to 1.45.26 in /exporter/awsemfexporter
- Bump github.com/aws/aws-sdk-go from 1.45.25 to 1.45.26 in /exporter/awsxrayexporter
- Bump github.com/aws/aws-sdk-go from 1.45.25 to 1.45.26 in /exporter/datadogexporter
- Bump github.com/aws/aws-sdk-go from 1.45.25 to 1.45.26 in /extension/observer/ecsobserver
- Bump github.com/aws/aws-sdk-go from 1.45.25 to 1.45.26 in /internal/aws/awsutil
- Bump github.com/aws/aws-sdk-go from 1.45.25 to 1.45.26 in /internal/aws/proxy
- Bump github.com/aws/aws-sdk-go from 1.45.25 to 1.45.26 in /internal/aws/xray
- Bump github.com/aws/aws-sdk-go from 1.45.25 to 1.45.26 in /internal/aws/xray/testdata/sampleapp
- Bump github.com/aws/aws-sdk-go from 1.45.25 to 1.45.26 in /internal/metadataproviders
- Bump github.com/aws/aws-sdk-go from 1.45.25 to 1.45.26 in /processor/resourcedetectionprocessor
- Bump github.com/aws/aws-sdk-go from 1.45.25 to 1.45.26 in /receiver/awscontainerinsightreceiver
- Bump github.com/aws/aws-sdk-go from 1.45.25 to 1.45.26 in /receiver/awsecscontainermetricsreceiver
- Bump github.com/aws/aws-sdk-go-v2 from 1.21.1 to 1.21.2 in /exporter/awskinesisexporter
- Bump github.com/aws/aws-sdk-go-v2 from 1.21.1 to 1.21.2 in /extension/sigv4authextension
- Bump github.com/aws/aws-sdk-go-v2/config from 1.18.44 to 1.19.0 in /exporter/awskinesisexporter
- Bump github.com/aws/aws-sdk-go-v2/config from 1.18.44 to 1.19.0 in /extension/sigv4authextension
- Bump github.com/aws/aws-sdk-go-v2/credentials from 1.13.42 to 1.13.43 in /exporter/awskinesisexporter
- Bump github.com/aws/aws-sdk-go-v2/credentials from 1.13.42 to 1.13.43 in /extension/sigv4authextension
- Bump github.com/aws/aws-sdk-go-v2/service/kinesis from 1.19.1 to 1.19.2 in /exporter/awskinesisexporter
- Bump github.com/aws/aws-sdk-go-v2/service/sts from 1.23.1 to 1.23.2 in /exporter/awskinesisexporter
- Bump github.com/aws/aws-sdk-go-v2/service/sts from 1.23.1 to 1.23.2 in /extension/sigv4authextension
- Bump github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common from 1.0.762 to 1.0.766 in /exporter/tencentcloudlogserviceexporter
- Bump go.mongodb.org/atlas from 0.33.0 to 0.34.0 in /receiver/mongodbatlasreceiver
…7647) **Description:** Adding a feature - asynchronous & concurrency mode for the UDP receiver/stanza input operator; the goal is to reduce UDP packet loss in high-scale scenarios. Added an 'async' block that holds a 'FixedAReaderRoutineCount' field - it determines how many concurrent readers will read from the UDP port, process logs, and send them downstream. **Link to tracking Issue:** #27613 **Testing:** Local stress tests ran with all types of config (no 'async', with empty 'async', with 'async' that contains FixedAReaderRoutineCount=2). In repo, added a single test to udp_test, config_test (in the stanza udp operator), and udp_test (in udplogreceiver). **Documentation:** Updated the md file for both udplogreceiver & the stanza udp_input operator with the new flags.
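A hedged sketch of how the new block might appear in a udplog receiver config; the `readers` key name is a guess on my part (the commit message refers to a 'FixedAReaderRoutineCount' field), so check the updated README for the exact spelling:

```yaml
receivers:
  udplog:
    listen_address: "0.0.0.0:54525"
    async:           # enables the new asynchronous mode
      readers: 2     # assumed key name for the concurrent-reader count
```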
) change resourcequota and clusterquota metrics to use {resource} units. **Link to tracking Issue:** #10553
The Prometheus Remote Write exporter README was missing the default values for the remote write queue config. Added the values after looking into the code.
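A sketch of the queue block with the defaults as I understand them; the specific numbers and the endpoint are assumptions and should be checked against the updated README:

```yaml
exporters:
  prometheusremotewrite:
    endpoint: https://prometheus.example.com/api/v1/write   # illustrative endpoint
    remote_write_queue:
      enabled: true       # assumed default
      queue_size: 10000   # assumed default
      num_consumers: 5    # assumed default
```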
This change adds the "exporter.datadogexporter.disable_apm_stats" feature flag, which can be enabled to disable APM stats computation. Updates #28615
I came across `zipkinreceiver` and observed we don't follow the receiver [contract](https://github.com/open-telemetry/opentelemetry-collector/blob/b2961b799e2c1ec128f0539764af1fa10c839e04/receiver/doc.go#L21). We return `InternalServerError` straight away without checking for permanent/non-permanent errors. We should probably return BadRequest in case of permanent errors (open-telemetry/opentelemetry-collector#4335). **Testing:** Added test cases Co-authored-by: Andrzej Stencel <astencel@sumologic.com>
…ead of using export function (#27259) **Description:** Wavefrontreceiver is very similar to carbonreceiver: it is TCP based, and each received text line represents a single metric data point. In order to avoid using the exported function `carbonreceiver.New(...)`, we can wrap the metrics receiver under the carbon receiver. **Link to tracking Issue:** #27248 **Testing:** make chlog-validate; go test for wavefrontreceiver **Documentation:** --------- Co-authored-by: Pablo Baeyens <pbaeyens31+github@gmail.com>
**Description:** Rename remoteobserverprocessor to remotetapprocessor **Link to tracking Issue:** Fixes #27873
**Description:** We don't have exemplars added to Sum metrics right now. This PR provides an enhancement to add exemplars to Sum metrics in Spanmetrics connector **Testing:** Added unit tests and also tested it in our local environment.
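For reference, a hedged sketch of how exemplars are typically switched on for the spanmetrics connector; the exact key may differ, so consult the connector README:

```yaml
connectors:
  spanmetrics:
    exemplars:
      enabled: true   # attach exemplars (trace/span IDs) to the generated metrics
```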
Regenerate codeowners with `make gengithub`
**Description:** Factory implementation of Alertmanager Exporter Initial PR - base configs and factory implementation **Link to tracking Issue:** [#23659](#23569) **Testing:** Unit tests for config and factory implementation **Documentation:** Readme and Sample Configs to use Alertmanager exporter --------- Signed-off-by: Juraci Paixão Kröhling <juraci@kroehling.de> Co-authored-by: Juraci Paixão Kröhling <juraci@kroehling.de>
…26115) **Description:** Adds a bounded duration sampling processor, distinct from the existing latency one in that it has both lower and upper bounds Apologies for this appearing as a pull request out of nothing, my intent had actually been to create a review area against my own fork and raise an issue asking if you'd accept the PR. I think the need here is pretty obvious from the context, though I think it's easy to imagine preferring this to be a change to the existing processor. I raised as a new one as I thought it might make existing behavior cleaner to retain. **Link to tracking Issue:** As above this is a bit of a premature PR since I intended to raise as an issue, and thus there isn't one, but I think it's easy enough to deal with here so leaving open for now and have learned GitHub's ways for the future (I rarely use github). **Testing:** New module so associated tests are added showing all relevant behavior, and passing. **Documentation:** Updated README and example config --------- Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
…rter (#28863) * Link to related GCP docs * Clarify mention of "traces" * Drop mention of PromQL support as a difference from `googlecloud` exporter
**Description:**
* Adds a new `mtime` sort type, which will sort files by their modified time
* Adds a feature gate for the `mtime` sort type

An optional follow-up performance improvement may be made here, to have the finder return fs.DirEntry directly to query the mtime without making an extra call to os.Stat for each file. **Link to tracking Issue:** #27812 **Testing:** Added unit tests for new functionality **Documentation:** Added new `mode` parameter to filelogreceiver docs
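A hedged sketch of using the new sort type through the filelog receiver's ordering criteria; the field names follow the existing `ordering_criteria` structure and should be checked against the receiver README, and the feature gate from this change also has to be enabled:

```yaml
receivers:
  filelog:
    include: [ /var/log/app/*.log ]
    ordering_criteria:
      top_n: 1              # only ingest the most recently modified match
      sort_by:
        - sort_type: mtime  # new sort type added by this change
```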
**Description:** A part of #28693. Move `skywalking_to_traces` in `skywalkingreceiver` into `pkg/translator/skywalking`. --------- Signed-off-by: Jared Tan <jian.tan@daocloud.io>
#28836) **Description:** Update README about disabling the feature gate of native metric client and falling back to Zorkian client. --------- Co-authored-by: Pablo Baeyens <pbaeyens31+github@gmail.com>
…put (#27201) Adding a feature - Use one exporter per worker for better metrics throughput.

Initially, adding more workers in the telemetrygen config when running "metrics" did not increase the metrics throughput, since all workers used the same exporter. By creating one exporter per worker we can now increase the number of metrics being sent to the backend. Fixes #26709

- Unit tests pass
- Ran local load tests with different configurations

## Before code change

Generate metrics:
```
telemetrygen metrics \
  --metric-type Sum \
  --duration "60s" \
  --rate "0" \
  --workers "10" \
  --otlp-http=false \
  --otlp-endpoint <HOSTNAME> \
  --otlp-attributes "service.name"=\"telemetrygen\"
```
Output:
```
metrics generated {"worker": 8, "metrics": 139}
metrics generated {"worker": 0, "metrics": 139}
metrics generated {"worker": 9, "metrics": 141}
metrics generated {"worker": 4, "metrics": 140}
metrics generated {"worker": 2, "metrics": 140}
metrics generated {"worker": 3, "metrics": 140}
metrics generated {"worker": 7, "metrics": 140}
metrics generated {"worker": 5, "metrics": 140}
metrics generated {"worker": 1, "metrics": 140}
metrics generated {"worker": 6, "metrics": 140}
```

## After code change

```
telemetrygen metrics \
  --metric-type Sum \
  --duration "60s" \
  --rate "0" \
  --workers "10" \
  --otlp-http=false \
  --otlp-endpoint <HOSTNAME> \
  --otlp-attributes "service.name"=\"telemetrygen\"
```
Output:
```
metrics generated {"worker": 6, "metrics": 1292}
metrics generated {"worker": 3, "metrics": 1277}
metrics generated {"worker": 5, "metrics": 1272}
metrics generated {"worker": 8, "metrics": 1251}
metrics generated {"worker": 9, "metrics": 1241}
metrics generated {"worker": 4, "metrics": 1227}
metrics generated {"worker": 0, "metrics": 1212}
metrics generated {"worker": 2, "metrics": 1201}
metrics generated {"worker": 1, "metrics": 1333}
metrics generated {"worker": 7, "metrics": 1363}
```

By adding more workers you can now export more metrics and use `telemetrygen` better for load-testing use cases. With the code change I can now utilize my CPU better for load tests: when adding 200 workers to the above config, the CPU usage can go above 80%, whereas before the change CPU usage would be around 1% with 200 workers.

![image](https://github.com/open-telemetry/opentelemetry-collector-contrib/assets/558256/66727e5f-6b0a-44a3-8436-7e6985d6a01c)

--------- Co-authored-by: Alex Boten <aboten@lightstep.com>
…s on windows and darwin (#28864) **Description:** There were some issues related to how `mock.On` works. With the default mock, an additional `On` call for a method that is already registered appends to a list and won't be invoked, because one instance of the method is already there, so some expectations regarding return values were not met. The metrics count for darwin is 3 because disk io is disabled [here](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/f509060a8d1ab5ca4b5827e0c60d1149e3059908/receiver/hostmetricsreceiver/internal/scraper/processscraper/process_scraper.go#L315). Tested locally on mac, windows 11 and ubuntu 22. **Link to tracking Issue:** #28828
) **Description:** Do not use the exported function `carbonreceiver.New`; replace it with `factory.CreateMetricsReceiver` so that carbonreceiver passes the checkapi tool. **Link to tracking Issue:** #28857
To fix failing `build-and-test / checks` CI job
Run `make gengithub` locally.
…r Exporter (#28854) **Description:** This pull request introduces the ability to configure the Azure Monitor Exporter using a connection string, aligning the exporter configuration with Azure Monitor's recommended practices. The current implementation requires users to set the instrumentation key directly, which will soon be deprecated in favor of using the connection string.

**Changes Made:**
1. Configuration Update: Modified the `Config` struct and related configuration parsing logic to support a `ConnectionString` field.
2. Parsing Logic: Implemented functionality to parse the connection string and extract necessary details, such as `InstrumentationKey` and `IngestionEndpoint`.
3. Updated Tests: Revised existing tests and added new ones to ensure coverage of the new configuration option.

**Benefits:**
* Streamlines the configuration process for end-users.
* Aligns with Azure Monitor's best practices and recommended configuration approach.
* Paves the way for the upcoming deprecation of direct instrumentation key configuration.

**Backwards Compatibility:** This update maintains full backwards compatibility. Users currently utilizing the instrumentation key for configuration can continue to do so but are advised to transition to using the connection string.

**To-Do**
* Documentation update in a follow-up PR
* Deprecation notice: A future update will introduce a deprecation warning for users still configuring the exporter with the instrumentation key, encouraging them to switch to using a connection string.
* Add support for `EndpointSuffix` in connection string - https://learn.microsoft.com/en-us/azure/azure-monitor/app/sdk-connection-string?tabs=dotnet5#connection-string-with-an-endpoint-suffix

**Link to tracking Issue:** #28853

**Testing:** Conducted comprehensive testing, including unit tests, to validate that the new configuration option works as expected and does not introduce regressions. All tests are currently passing.
```
[Wed Nov 1 12:53:42 PDT 2023] --------- Transmitting 27 items ---------
[Wed Nov 1 12:53:43 PDT 2023] Telemetry transmitted in 331.926261ms
[Wed Nov 1 12:53:43 PDT 2023] Response: 200
[Wed Nov 1 12:53:43 PDT 2023] Items accepted/received: 27/27
[Wed Nov 1 12:53:53 PDT 2023] --------- Transmitting 30 items ---------
[Wed Nov 1 12:53:53 PDT 2023] Telemetry transmitted in 73.171392ms
[Wed Nov 1 12:53:53 PDT 2023] Response: 200
[Wed Nov 1 12:53:53 PDT 2023] Items accepted/received: 30/30
[Wed Nov 1 12:54:04 PDT 2023] --------- Transmitting 27 items ---------
[Wed Nov 1 12:54:04 PDT 2023] Telemetry transmitted in 68.037724ms
[Wed Nov 1 12:54:04 PDT 2023] Response: 200
[Wed Nov 1 12:54:04 PDT 2023] Items accepted/received: 27/27
```

**Documentation:** TODO, in a follow-up PR.
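A hedged sketch of what the connection-string configuration might look like in a collector config; the `connection_string` key name and the placeholder values are assumptions on my part, so refer to the exporter README once the follow-up documentation lands:

```yaml
exporters:
  azuremonitor:
    # assumed key name; values are placeholders, not real credentials
    connection_string: "InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://eastus-0.in.applicationinsights.azure.com/"
```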
…and strip prefix `AWS.SDK.` from aws remote service name (#27232) **Description:**
- Convert individual HTTP error events into exceptions within subsegments for AWS SDK spans.
- Normalize the service name from the `awsxray.AWSServiceAttribute` attribute by removing the `AWS.SDK.` prefix (in some aws sdk instrumentation, we have added the prefix to produce metrics that clearly indicate the resource). This change ensures that the X-Ray backend recognizes standard service names like "DynamoDb", "S3", etc., enabling correct identification of AWS service types.

**Link to tracking Issue:** NA **Testing:** Unit tests are added. **Documentation:** NA --------- Co-authored-by: John Knollmeyer <jknollm@amazon.com> Co-authored-by: John Knollmeyer <jaknollmeyer@gmail.com>
Do not export function `New` and pass checkapi. #26304 Signed-off-by: sakulali <sakulali@126.com>
Rather than importing a deprecated module, this embeds the contents of that module in the testbed. Part of #28647 Signed-off-by: Alex Boten <aboten@lightstep.com>
**Description:** I failed to reproduce the []uint8-to-int64 conversion, but I was able to reproduce the float64-to-int64 conversion error. The different types may be due to different versions or values reported. The fix is forcing the query to retrieve integer values. While this may seem like the most obvious fix, I'm not really aligned with it. What the query returns is the lag as a decimal number (the whole part is seconds); by forcing this to return just an int we lose precision: `0.4s` is reported as `0` while it is really `400ms`. My proposal here consists of 2 options. The first is to change the reporting so that what we report is in fact a time span in `ms`. This could most likely be considered breaking. The second option (which I'm more in favor of) is to change the type of what is reported from int to float. This way the unit is intact and does not break possible visualizations, but we gain precision and won't lose data. This is my first issue here, so I wanted to get some feedback before publishing something unreasonable. _EDIT_ Went with the option of deprecating the metrics with second precision (still fixing the conversion failures) and introducing alternatives to these metrics with an `_ms` suffix in the name and millisecond precision. The old metrics are now behind a featuregate which is enabled by default for now. **Link to tracking Issue:** #26714 **Testing:** Setting up replicated postgres instances and testing the method against this deployment. **Documentation:** - --------- Co-authored-by: Daniel Jaglowski <jaglows3@gmail.com>
**Description:** Logging was broken after #25900 (released in v0.84.0). It is fixed by open-telemetry/opentelemetry-collector#8792, which will be released in v0.89.0. This will help with any distributions that include the googlecloud exporter components.