[cmd/telemetrygen] Use exporter per worker for better metrics throughput (open-telemetry#27201)

Adding a feature: use one exporter per worker for better metrics throughput.

Previously, adding more workers to the telemetrygen configuration when running the `metrics` command did not increase metrics throughput, because all workers shared a single exporter.

With one exporter per worker, the number of metrics sent to the backend now scales with the number of workers.
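
Conceptually, the change boils down to the sketch below: a minimal, self-contained Go example (not the telemetrygen source) in which every worker goroutine constructs its own OTLP gRPC exporter instead of sharing one. The endpoint, worker count, and the empty metrics batch are placeholders for illustration only.

```go
package main

import (
	"context"
	"log"
	"sync"

	"go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc"
	"go.opentelemetry.io/otel/sdk/metric/metricdata"
	"go.opentelemetry.io/otel/sdk/resource"
)

func main() {
	const workers = 10
	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func(idx int) {
			defer wg.Done()
			ctx := context.Background()
			// Each worker builds its own exporter, so exports no longer
			// serialize behind one shared gRPC client.
			exp, err := otlpmetricgrpc.New(ctx,
				otlpmetricgrpc.WithEndpoint("localhost:4317"), // placeholder endpoint
				otlpmetricgrpc.WithInsecure(),
			)
			if err != nil {
				log.Printf("worker %d: failed to create exporter: %v", idx, err)
				return
			}
			defer func() {
				if err := exp.Shutdown(ctx); err != nil {
					log.Printf("worker %d: failed to stop exporter: %v", idx, err)
				}
			}()
			// Export an (empty) batch to exercise the per-worker pipeline; the
			// real tool builds Sum or Gauge data points here in a loop.
			rm := &metricdata.ResourceMetrics{Resource: resource.Empty()}
			if err := exp.Export(ctx, rm); err != nil {
				log.Printf("worker %d: export failed: %v", idx, err)
			}
		}(i)
	}
	wg.Wait()
}
```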

Fixes open-telemetry#26709

- Unit tests pass
- Ran local load tests with different configurations

## Before code change

Generate metrics:

```
telemetrygen metrics \
    --metric-type Sum \
    --duration "60s" \
    --rate "0" \
    --workers "10" \
    --otlp-http=false \
    --otlp-endpoint <HOSTNAME> \
    --otlp-attributes "service.name"=\"telemetrygen\"
```

Output:
```
metrics generated	{"worker": 8, "metrics": 139}
metrics generated	{"worker": 0, "metrics": 139}
metrics generated	{"worker": 9, "metrics": 141}
metrics generated	{"worker": 4, "metrics": 140}
metrics generated	{"worker": 2, "metrics": 140}
metrics generated	{"worker": 3, "metrics": 140}
metrics generated	{"worker": 7, "metrics": 140}
metrics generated	{"worker": 5, "metrics": 140}
metrics generated	{"worker": 1, "metrics": 140}
metrics generated	{"worker": 6, "metrics": 140}
```

## After code change

Generate metrics (same command as above):

```
telemetrygen metrics \
    --metric-type Sum \
    --duration "60s" \
    --rate "0" \
    --workers "10" \
    --otlp-http=false \
    --otlp-endpoint <HOSTNAME> \
    --otlp-attributes "service.name"=\"telemetrygen\"
```

Output:

```
metrics generated	{"worker": 6, "metrics": 1292}
metrics generated	{"worker": 3, "metrics": 1277}
metrics generated	{"worker": 5, "metrics": 1272}
metrics generated	{"worker": 8, "metrics": 1251}
metrics generated	{"worker": 9, "metrics": 1241}
metrics generated	{"worker": 4, "metrics": 1227}
metrics generated	{"worker": 0, "metrics": 1212}
metrics generated	{"worker": 2, "metrics": 1201}
metrics generated	{"worker": 1, "metrics": 1333}
metrics generated	{"worker": 7, "metrics": 1363}
```

Per-worker throughput in the runs above increased from roughly 140 metrics in 60 seconds to well over 1,200, so adding workers now translates directly into more exported metrics, which makes `telemetrygen` much more useful for load-testing scenarios.

The change also makes better use of the available CPU during load tests: with 200 workers and the configuration above, CPU usage can exceed 80%, whereas before the change it stayed around 1% with the same worker count.
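
For reference, that heavier run uses the same command as above with only the worker count raised (the endpoint remains a placeholder):

```
telemetrygen metrics \
    --metric-type Sum \
    --duration "60s" \
    --rate "0" \
    --workers "200" \
    --otlp-http=false \
    --otlp-endpoint <HOSTNAME> \
    --otlp-attributes "service.name"=\"telemetrygen\"
```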


![image](https://github.com/open-telemetry/opentelemetry-collector-contrib/assets/558256/66727e5f-6b0a-44a3-8436-7e6985d6a01c)

---------

Co-authored-by: Alex Boten <aboten@lightstep.com>
2 people authored and RoryCrispin committed Nov 24, 2023
1 parent 92b8b0e commit 979ddea
Showing 5 changed files with 155 additions and 67 deletions.
25 changes: 25 additions & 0 deletions .chloggen/telemetrygen-add-exporter-per-worker.yaml
@@ -0,0 +1,25 @@
# One of 'breaking', 'deprecation', 'new_component', 'enhancement', 'bug_fix'
change_type: 'enhancement'

# The name of the component, or a single word describing the area of concern, (e.g. filelogreceiver)
component: cmd/telemetrygen

# A brief description of the change. Surround your text with quotes ("") if it needs to start with a backtick (`).
note: Use exporter per worker for better metrics throughput

# Mandatory: One or more tracking issues related to the change. You can use the PR number here if no issue exists.
issues: [26709]

# (Optional) One or more lines of additional information to render under the primary note.
# These lines will be padded with 2 spaces and then inserted directly into the document.
# Use pipe (|) for multiline entries.
subtext:

# If your change doesn't affect end users or the exported elements of any package,
# you should instead start your pull request title with [chore] or use the "Skip Changelog" label.
# Optional: The change log or logs in which this entry should be included.
# e.g. '[user]' or '[user, api]'
# Include 'user' if the change is relevant to end users.
# Include 'api' if there is a change to a library API.
# Default: '[user]'
change_logs: [user]
40 changes: 34 additions & 6 deletions cmd/telemetrygen/README.md
@@ -28,28 +28,44 @@ Check the [`go install` reference](https://go.dev/ref/mod#go-install) to install

First, you'll need an OpenTelemetry Collector to receive the telemetry data. Follow the project's instructions for a detailed setup guide. The following configuration file should be sufficient:

config.yaml:
```diff
 receivers:
   otlp:
     protocols:
       grpc:
-        endpoint: localhost:4317
+        endpoint: 0.0.0.0:4317

 processors:
   batch:

 exporters:
   debug:
     verbosity: detailed

 service:
   pipelines:
+    logs:
+      receivers: [otlp]
+      processors: [batch]
+      exporters: [debug]
+    metrics:
+      receivers: [otlp]
+      processors: [batch]
+      exporters: [debug]
     traces:
-      receivers:
-        - otlp
-      processors: []
-      exporters:
-        - debug
+      receivers: [otlp]
+      processors: [batch]
+      exporters: [debug]
```
Starting the OpenTelemetry Collector via Docker:
```
docker run -p 4317:4317 -v $(pwd)/config.yaml:/etc/otelcol-contrib/config.yaml ghcr.io/open-telemetry/opentelemetry-collector-releases/opentelemetry-collector-contrib:0.86.0
```

Other options for running the Collector are documented at https://opentelemetry.io/docs/collector/getting-started/.

Once the OpenTelemetry Collector instance is up and running, run `telemetrygen` for your desired telemetry:

### Traces
@@ -65,3 +81,15 @@ telemetrygen traces --otlp-insecure --traces 1
```

Check `telemetrygen traces --help` for all the options.

### Logs

```console
telemetrygen logs --duration 5s --otlp-insecure
```

### Metrics

```console
telemetrygen metrics --duration 5s --otlp-insecure
```
31 changes: 12 additions & 19 deletions cmd/telemetrygen/internal/metrics/metrics.go
@@ -29,26 +29,19 @@ func Start(cfg *Config) error {
 	}
 	logger.Info("starting the metrics generator with configuration", zap.Any("config", cfg))
 
-	var exp sdkmetric.Exporter
-	if cfg.UseHTTP {
-		logger.Info("starting HTTP exporter")
-		exp, err = otlpmetrichttp.New(context.Background(), httpExporterOptions(cfg)...)
-	} else {
-		logger.Info("starting gRPC exporter")
-		exp, err = otlpmetricgrpc.New(context.Background(), grpcExporterOptions(cfg)...)
-	}
-
-	if err != nil {
-		return fmt.Errorf("failed to obtain OTLP exporter: %w", err)
-	}
-	defer func() {
-		logger.Info("stopping the exporter")
-		if tempError := exp.Shutdown(context.Background()); tempError != nil {
-			logger.Error("failed to stop the exporter", zap.Error(tempError))
+	expFunc := func() (sdkmetric.Exporter, error) {
+		var exp sdkmetric.Exporter
+		if cfg.UseHTTP {
+			logger.Info("starting HTTP exporter")
+			exp, err = otlpmetrichttp.New(context.Background(), httpExporterOptions(cfg)...)
+		} else {
+			logger.Info("starting gRPC exporter")
+			exp, err = otlpmetricgrpc.New(context.Background(), grpcExporterOptions(cfg)...)
 		}
-	}()
+		return exp, err
+	}
 
-	if err = Run(cfg, exp, logger); err != nil {
+	if err = Run(cfg, expFunc, logger); err != nil {
 		logger.Error("failed to stop the exporter", zap.Error(err))
 		return err
 	}
@@ -57,7 +50,7 @@ func Start(cfg *Config) error {
 }
 
 // Run executes the test scenario.
-func Run(c *Config, exp sdkmetric.Exporter, logger *zap.Logger) error {
+func Run(c *Config, exp func() (sdkmetric.Exporter, error), logger *zap.Logger) error {
 	if c.TotalDuration > 0 {
 		c.NumMetrics = 0
 	} else if c.NumMetrics <= 0 {
15 changes: 14 additions & 1 deletion cmd/telemetrygen/internal/metrics/worker.go
@@ -28,9 +28,22 @@ type worker struct {
 	index int // worker index
 }
 
-func (w worker) simulateMetrics(res *resource.Resource, exporter sdkmetric.Exporter, signalAttrs []attribute.KeyValue) {
+func (w worker) simulateMetrics(res *resource.Resource, exporterFunc func() (sdkmetric.Exporter, error), signalAttrs []attribute.KeyValue) {
 	limiter := rate.NewLimiter(w.limitPerSecond, 1)
 
+	exporter, err := exporterFunc()
+	if err != nil {
+		w.logger.Error("failed to create the exporter", zap.Error(err))
+		return
+	}
+
+	defer func() {
+		w.logger.Info("stopping the exporter")
+		if tempError := exporter.Shutdown(context.Background()); tempError != nil {
+			w.logger.Error("failed to stop the exporter", zap.Error(tempError))
+		}
+	}()
+
 	var i int64
 	for w.running.Load() {
 		var metrics []metricdata.Metrics