Commit

Cleanup documents that refer to queued_retry processor (#2496)
* Cleanup documents that refer to queued_retry processor

Signed-off-by: Bogdan Drutu <bogdandrutu@gmail.com>

* Update docs/design.md

Co-authored-by: Tigran Najaryan <4194920+tigrannajaryan@users.noreply.github.com>

* Update docs/design.md

Co-authored-by: Tigran Najaryan <4194920+tigrannajaryan@users.noreply.github.com>

Co-authored-by: Tigran Najaryan <4194920+tigrannajaryan@users.noreply.github.com>
bogdandrutu and tigrannajaryan authored Feb 16, 2021
1 parent 211538e commit 846b971
Showing 3 changed files with 33 additions and 28 deletions.
41 changes: 23 additions & 18 deletions docs/design.md
@@ -29,12 +29,12 @@ A pipeline configuration typically looks like this:
service:
  pipelines: # section that can contain multiple subsections, one per pipeline
    traces: # type of the pipeline
-      receivers: [opencensus, jaeger, zipkin]
-      processors: [tags, tail_sampling, batch, queued_retry]
-      exporters: [opencensus, jaeger, stackdriver, zipkin]
+      receivers: [otlp, jaeger, zipkin]
+      processors: [memory_limiter, batch]
+      exporters: [otlp, jaeger, zipkin]
```
-The above example defines a pipeline for “traces” type of telemetry data, with 3 receivers, 4 processors and 4 exporters.
+The above example defines a pipeline for “traces” type of telemetry data, with 3 receivers, 2 processors and 3 exporters.
For details of config file format see [this document](https://docs.google.com/document/d/1NeheFG7DmcUYo_h2vLtNRlia9x5wOJMlV4QKEK05FhQ/edit#).
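The snippet in this hunk shows only the `service` section. For orientation, a minimal but complete configuration using the newly recommended components might look like the sketch below; the component-specific settings (`check_interval`, `limit_mib`, the `otlp` exporter `endpoint`) are illustrative values, not prescriptions from this commit:

```yaml
receivers:
  otlp:
    protocols:
      grpc:

processors:
  memory_limiter:
    check_interval: 1s
    limit_mib: 4000
  batch:

exporters:
  otlp:
    endpoint: "backend.example.com:4317"

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [memory_limiter, batch]
      exporters: [otlp]
```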
@@ -51,7 +51,7 @@ service:
  pipelines:
    traces: # a pipeline of “traces” type
      receivers: [opencensus]
-      processors: [tags, tail_sampling, batch, queued_retry]
+      processors: [memory_limiter, batch]
      exporters: [jaeger]
    traces/2: # another pipeline of “traces” type
      receivers: [opencensus]
@@ -63,6 +63,8 @@ In the above example “opencensus” receiver will send the same data to pipeli

When the Collector loads this config the result will look like this (part of processors and exporters are omitted from the diagram for brevity):

+**TODO** Update picture and replace `"tags" processor` with `"memory_limiter" processor`

![Receivers](images/design-receivers.png)

Important: when the same receiver is referenced in more than one pipeline the Collector will create only one receiver instance at runtime that will send the data to `FanOutConnector` which in turn will send the data to the first processor of each pipeline. The data propagation from receiver to `FanOutConnector` and then to processors is via synchronous function call. This means that if one processor blocks the call the other pipelines that are attached to this receiver will be blocked from receiving the same data and the receiver itself will stop processing and forwarding newly received data.
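The fan-out and synchronous-call behavior described above can be illustrated with a toy sketch (these are not the real collector types; the actual project uses interfaces like `consumer.Traces`, and `FanOutConnector` lives in the collector codebase):

```go
package main

import "fmt"

// Consumer is a stand-in for the collector's consumer interface.
type Consumer interface {
	Consume(data string) error
}

// printer represents the first processor of a pipeline.
type printer struct{ name string }

func (p printer) Consume(data string) error {
	fmt.Printf("%s got %q\n", p.name, data)
	return nil
}

// fanOut mimics the FanOutConnector: it forwards the same data to every
// downstream consumer via synchronous calls, so a blocking consumer delays
// all the other pipelines attached to the same receiver.
type fanOut struct{ consumers []Consumer }

func (f fanOut) Consume(data string) error {
	for _, c := range f.consumers {
		if err := c.Consume(data); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	// One receiver instance feeding two pipelines through a fan-out.
	fo := fanOut{consumers: []Consumer{
		printer{"pipeline traces"},
		printer{"pipeline traces/2"},
	}}
	_ = fo.Consume("span batch")
}
```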
@@ -94,16 +96,18 @@ service:
  pipelines:
    traces: # a pipeline of “traces” type
      receivers: [zipkin]
-      processors: [tags, tail_sampling, batch, queued_retry]
+      processors: [memory_limiter]
      exporters: [jaeger]
    traces/2: # another pipeline of “traces” type
-      receivers: [opencensus]
+      receivers: [otlp]
      processors: [batch]
      exporters: [jaeger]
```

In the above example “jaeger” exporter will get data from pipeline “traces” and from pipeline “traces/2”. When the Collector loads this config the result will look like this (part of processors and receivers are omitted from the diagram for brevity):

+**TODO** Update picture and replace `"queued-retry" processor` with `"memory_limiter" processor`

![Exporters](images/design-exporters.png)

### Processors
@@ -112,32 +116,33 @@ A pipeline can contain sequentially connected processors. The first processor ge

Processors can transform the data before forwarding it (i.e. add or remove attributes from spans), they can drop the data simply by deciding not to forward it (this is for example how “sampling” processor works), they can also generate new data (this is how for example how a “persistent-queue” processor can work after Collector restarts by reading previously saved data from a local file and forwarding it on the pipeline).
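The three processor behaviors named above (transform, drop, generate) can be sketched with toy types; nothing here is the collector's real API, just an illustration of processors chained sequentially:

```go
package main

import "fmt"

// Span is a toy stand-in for real telemetry data.
type Span struct {
	Name  string
	Attrs map[string]string
}

// processor is one stage that may transform the data or drop it by
// returning false (i.e. deciding not to forward it).
type processor func(Span) (Span, bool)

// addAttr transforms data by adding an attribute, like an attributes processor.
func addAttr(k, v string) processor {
	return func(s Span) (Span, bool) {
		s.Attrs[k] = v
		return s, true
	}
}

// dropNamed drops matching spans, the way a sampling processor discards data.
func dropNamed(name string) processor {
	return func(s Span) (Span, bool) {
		return s, s.Name != name
	}
}

// runPipeline chains processors sequentially, as in a collector pipeline.
func runPipeline(s Span, stages []processor) (Span, bool) {
	for _, p := range stages {
		var ok bool
		if s, ok = p(s); !ok {
			return s, false
		}
	}
	return s, true
}

func main() {
	stages := []processor{dropNamed("healthcheck"), addAttr("env", "prod")}
	if out, ok := runPipeline(Span{Name: "query", Attrs: map[string]string{}}, stages); ok {
		fmt.Println(out.Name, out.Attrs["env"]) // query prod
	}
	if _, ok := runPipeline(Span{Name: "healthcheck", Attrs: map[string]string{}}, stages); !ok {
		fmt.Println("healthcheck span dropped")
	}
}
```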

-The same name of the processor can be referenced in the “processors” key of multiple pipelines. In this case the same configuration will be used for each of these processors however each pipeline will always gets its own instance of the processor. Each of these processors will have its own state, the processors are never shared between pipelines. For example if “queued_retry” processor is used several pipelines each pipeline will have its own queue (although the queues will be configured exactly the same way if the reference the same key in the config file). As an example, given the following config:
+The same name of the processor can be referenced in the “processors” key of multiple pipelines. In this case the same configuration will be used for each of these processors, however each pipeline will always get its own instance of the processor. Each of these processors will have its own state; the processors are never shared between pipelines. For example, if the “batch” processor is used in several pipelines, each pipeline will have its own batch processor (although each batch processor will be configured exactly the same way if they reference the same key in the config file). As an example, given the following config:

```yaml
processors:
-  queued_retry:
-    size: 50
-    per-exporter: true
-    enabled: true
+  batch:
+    send_batch_size: 10000
+    timeout: 10s
service:
  pipelines:
    traces: # a pipeline of “traces” type
      receivers: [zipkin]
-      processors: [queued_retry]
+      processors: [batch]
      exporters: [jaeger]
    traces/2: # another pipeline of “traces” type
-      receivers: [opencensus]
-      processors: [queued_retry]
-      exporters: [opencensus]
+      receivers: [otlp]
+      processors: [batch]
+      exporters: [otlp]
```

When the Collector loads this config the result will look like this:

+**TODO** Update picture and replace `"queued-retry" processor` with `"batch" processor`

![Processors](images/design-processors.png)

-Note that each “queued_retry” processor is an independent instance, although both are configured the same way, i.e. each have a size of 50.
+Note that each “batch” processor is an independent instance, although both are configured the same way, i.e. each has a send_batch_size of 10000.
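The shared-config, separate-instance point can be made concrete with a toy sketch (the real batch processor is far richer; this only shows that two instances built from one config value never share state):

```go
package main

import "fmt"

// batchConfig mirrors the idea of one config key shared by two pipelines.
type batchConfig struct {
	SendBatchSize int
}

// batcher holds per-instance state: its own buffer, never shared.
type batcher struct {
	cfg batchConfig
	buf []string
}

func newBatcher(cfg batchConfig) *batcher { return &batcher{cfg: cfg} }

func (b *batcher) add(item string) { b.buf = append(b.buf, item) }

func main() {
	cfg := batchConfig{SendBatchSize: 10000} // one config entry in the file
	p1 := newBatcher(cfg)                    // instance for pipeline "traces"
	p2 := newBatcher(cfg)                    // instance for pipeline "traces/2"
	p1.add("span")
	fmt.Println(len(p1.buf), len(p2.buf)) // 1 0 — identical config, separate state
}
```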

## <a name="opentelemetry-agent"></a>Running as an Agent

@@ -186,7 +191,7 @@ tasks/agents that emit in one of the supported protocols. The Collector is
configured to send data to the configured exporter(s). The following figure
summarizes the deployment architecture:

-TODO: update the diagram below.
+**TODO:** update the diagram below.

![OpenTelemetry Collector Architecture](https://user-images.githubusercontent.com/10536136/46637070-65f05f80-cb0f-11e8-96e6-bc56468486b3.png "OpenTelemetry Collector Architecture")

15 changes: 8 additions & 7 deletions docs/monitoring.md
@@ -32,13 +32,14 @@ The `safe_rate` depends on the specific configuration being used.

### Queue Length

-The `queued_retry` processor is recommended as the retry mechanism for the
-Collector and as such should be used in any production deployment.
-The `queued_retry` processor provides the
-`otelcol_processor_queued_retry_queue_length` metric, besides others.
-When this metric is growing constantly it is an indication that the Collector
-is not able to send data as fast as it is receiving.
-This will precede data loss and also can indicate a Collector low on resources.
+Most exporters offer a [queue/retry mechanism](../exporter/exporterhelper/README.md)
+that is recommended as the retry mechanism for the Collector and as such should
+be used in any production deployment.
+
+**TODO:** Add metric to monitor queue length.
+
+Currently, the queue/retry mechanism only supports logging for monitoring. Check
+the logs for messages like `"Dropping data because sending_queue is full"`.

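Since the log message is, per this change, the only monitoring signal for queue-full drops, a small helper like the one below can count them; the log file path passed in is entirely deployment-specific (the message text is taken from the doc, everything else is an assumption):

```shell
# queue_drops FILE — count queue-full drop messages in a Collector log file.
queue_drops() {
  grep -c "Dropping data because sending_queue is full" "$1"
}
```

Usage might look like `queue_drops /path/to/otelcol.log` in a cron job or alert script.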
### Receive Failures

5 changes: 2 additions & 3 deletions obsreport/doc.go
@@ -65,9 +65,8 @@
// * Data loss should be recorded only when the component itself remove the data
// from the pipeline. Legacy metrics for receivers used "dropped" in their names
// but these could be non-zero under normal operations and reflected no actual
-// data loss when components like the "queued_retry" are used. New metrics
-// were renamed to avoid this misunderstanding. Here are the general
-// recommendations to report data loss:
+// data loss when exporters with "sending_queue" are used. New metrics were renamed
+// to avoid this misunderstanding. Here are the general recommendations to report data loss:
//
// * Receivers reporting errors to clients typically result in the client
// re-sending the same data so it is more correct to report "receive errors",