
Perf test: TestTrace10kSPS/OpenCensus CPU consumption is 40.3%, max expected is 35% #89

Closed
pjanotti opened this issue Jan 3, 2020 · 2 comments

Comments

pjanotti commented Jan 3, 2020

CI failure for PR #88

--- FAIL: TestTrace10kSPS (22.32s)
    --- FAIL: TestTrace10kSPS/OpenCensus (6.00s)
        test_case.go:354: CPU consumption is 40.3%, max expected is 35%
    --- PASS: TestTrace10kSPS/SAPM (16.32s)
FAIL
exit status 1
FAIL	github.com/open-telemetry/opentelemetry-collector-contrib/testbed/tests	54.980s
# Test Results
Started: Fri, 03 Jan 2020 22:23:23 +0000

Test                                    |Result|Duration|CPU Avg%|CPU Max%|RAM Avg MiB|RAM Max MiB|Sent Items|Received Items|
----------------------------------------|------|-------:|-------:|-------:|----------:|----------:|---------:|-------------:|
Metric10kDPS/SignalFx                   |PASS  |     15s|    20.0|    20.7|         36|         45|    150000|        150000|
Metric10kDPS/OpenCensus                 |PASS  |     18s|     7.5|     8.0|         42|         52|    149900|        149900|
Trace10kSPS/OpenCensus                  |FAIL  |      6s|    32.5|    40.3|         39|         59|     59660|         57400|CPU consumption is 40.3%, max expected is 35%
Trace10kSPS/SAPM                        |PASS  |     16s|    43.7|    53.0|         69|         88|    149590|        149590|
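
The failing assertion comes from the testbed's resource checks (test_case.go:354 in the log above). As a rough illustration only, not the actual testbed code, the check amounts to comparing the sampled collector CPU usage against a configured ceiling; the type and field names below are assumptions:

```go
// Hypothetical sketch of the kind of CPU check behind the failure above;
// the names here are illustrative, not the real testbed API.
package testbed

import "fmt"

// resourceSpec holds the limits a perf test asserts against.
type resourceSpec struct {
	ExpectedMaxCPU float64 // percent of one core
}

// checkCPU returns an error when the observed maximum CPU usage exceeds the
// configured limit, producing a message like the one in the log above.
func checkCPU(spec resourceSpec, observedMaxCPU float64) error {
	if spec.ExpectedMaxCPU > 0 && observedMaxCPU > spec.ExpectedMaxCPU {
		return fmt.Errorf("CPU consumption is %.1f%%, max expected is %.0f%%",
			observedMaxCPU, spec.ExpectedMaxCPU)
	}
	return nil
}
```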

@pjanotti changed the title from "Perf test: failed TestTrace10kSPS/OpenCensus CPU consumption is 40.3%, max expected is 35%" to "Perf test: TestTrace10kSPS/OpenCensus CPU consumption is 40.3%, max expected is 35%" on Jan 3, 2020
@tigrannajaryan (Member) commented:

The PR's changes are clearly unrelated to this failure. The failure is most likely due to performance variance of the machine where the build runs. Since the build is not done in a controlled environment, this is not unexpected.

We may need to address this by running performance tests only on a machine in a controlled environment with predictable and stable resources. However, that is future work; for now I suggest we live with the occasional perf test failure and re-run failed tests to confirm they are transient glitches.

tigrannajaryan pushed a commit to tigrannajaryan/opentelemetry-collector-contrib that referenced this issue Jan 6, 2020
Perf tests are running in an uncontrolled CI environment. We need some margin
to ensure they pass.

TODO: find a better solution with a more controlled environment for perf tests:
open-telemetry#89
tigrannajaryan added a commit that referenced this issue Jan 7, 2020
Perf tests are running in an uncontrolled CI environment. We need some margin
to ensure they pass.

TODO: find a better solution with a more controlled environment for perf tests:
#89
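
As a hedged sketch of what that margin change amounts to (the field names and values below are assumptions for illustration, not the exact testbed API or the exact numbers in the commit): the per-test CPU ceiling is simply widened so that variance on shared CI machines no longer trips the check shown earlier.

```go
// Illustrative only: widen the CPU ceiling for a trace perf test case so a
// spike like the observed 40.3% no longer fails the run.
package tests

type resourceSpec struct {
	ExpectedMaxCPU float64 // percent of one core
	ExpectedMaxRAM float64 // MiB
}

// Before: the tight 35% limit that the run above exceeded.
var oldTrace10kSPSSpec = resourceSpec{ExpectedMaxCPU: 35, ExpectedMaxRAM: 80}

// After: extra headroom for an uncontrolled CI environment (values assumed).
var newTrace10kSPSSpec = resourceSpec{ExpectedMaxCPU: 50, ExpectedMaxRAM: 80}
```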
@bogdandrutu (Member) commented:

No longer an issue.

mxiamxia referenced this issue in mxiamxia/opentelemetry-collector-contrib Jul 22, 2020
GitHub issue: open-telemetry/opentelemetry-collector#38

Testing done:

- make && make otelsvc
- Run otelsvc with the following config and make sure Prometheus can scrape:

receivers:
  opencensus:
    port: 55678

  prometheus:
    config:
      scrape_configs:
        - job_name: 'demo'
          scrape_interval: 5s

zpages:
  port: 55679

exporters:
  prometheus:
    endpoint: "127.0.0.1:8889"

pipelines:
  metrics:
    receivers: [prometheus]
    exporters: [prometheus]
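
A minimal way to sanity-check the "make sure Prometheus can scrape" step, assuming otelsvc is running locally with the config above (the /metrics path follows the usual Prometheus exposition convention; this snippet is not from the original issue):

```go
// Fetch the exporter's exposition endpoint and report whether it is serving
// scrapeable data. Assumes the collector is already running with the config
// above and exposing the Prometheus exporter on 127.0.0.1:8889.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	resp, err := http.Get("http://127.0.0.1:8889/metrics")
	if err != nil {
		log.Fatalf("exporter endpoint not reachable: %v", err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatalf("failed to read exposition data: %v", err)
	}
	fmt.Printf("status=%d, %d bytes of exposition data\n", resp.StatusCode, len(body))
}
```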
ljmsc referenced this issue in ljmsc/opentelemetry-collector-contrib Feb 21, 2022
This is to make tag.Map an immutable type, so it is safe to use
concurrently. The safety is not yet fully achieved because of the
functions returning contents of the map (Value and Foreach). The
functions give callers an access to core.Value objects, which contain
a byte slice, which has pointer like semantics. So to avoid accidental
changes, we will need to copy the value if it is of BYTES type.

Fixes #59
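
A minimal sketch of the defensive copy described above (the types here are simplified stand-ins, not the actual core package): when a value is backed by a byte slice, returning a copy prevents callers from mutating the immutable map's internal storage through the shared slice.

```go
package tag

// valueKind distinguishes how an illustrative value is stored.
type valueKind int

const (
	kindString valueKind = iota
	kindBytes
)

// value is a simplified stand-in for core.Value.
type value struct {
	kind  valueKind
	str   string
	bytes []byte
}

// safeCopy returns v with its byte slice duplicated for BYTES values, so the
// caller cannot modify the map's underlying storage through the shared slice.
func safeCopy(v value) value {
	if v.kind == kindBytes && v.bytes != nil {
		dup := make([]byte, len(v.bytes))
		copy(dup, v.bytes)
		v.bytes = dup
	}
	return v
}
```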
jj22ee pushed a commit to jj22ee/opentelemetry-collector-contrib that referenced this issue Sep 21, 2023
…99/merge-aws-cwa-dev

Merge aws-cwa-dev into aws-cwa-apm