Clarify memory limiter operation #1084

Closed
tigrannajaryan opened this issue Jun 4, 2020 · 8 comments
@tigrannajaryan
Member

tigrannajaryan commented Jun 4, 2020

It is not entirely clear what currently happens when the memory limiter hits the limit. Do we continue receiving data and drop it? Or do we stop receiving data? Do receivers indicate to their senders to stop sending, i.e. apply backpressure/throttle them?

TODO:

  1. We need to document the behavior clearly and make any corresponding changes needed in the memory limiter code.
  2. We need to make sure all receivers consistently follow the documented behavior.
  3. We likely want to make the behavior configurable, i.e. drop vs. backpressure. Depending on the use case, the desired behavior may be different.
@nilebox
Member

nilebox commented Jun 17, 2020

Currently the Memory Limiter drops data and returns an error back to the receiver, so the action is actually specific to the receiver implementation: it may ignore the error, log it, retry, etc.

We should probably at least define a public error type/constant for this so that it can be processed consistently everywhere.
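
For illustration, here is a minimal Go sketch of what a shared sentinel error plus consistent receiver-side handling could look like. ErrDataRefused, consumeFunc, and handleBatch are hypothetical names invented for this sketch, not existing collector APIs.

```go
package main

import (
	"errors"
	"fmt"
)

// ErrDataRefused is a hypothetical public sentinel error that the memory
// limiter could return when it refuses data due to high memory usage.
var ErrDataRefused = errors.New("data refused due to high memory usage")

// consumeFunc stands in for the next consumer in the pipeline
// (ConsumeLogs/ConsumeTraces/ConsumeMetrics in the real collector).
type consumeFunc func(batch string) error

// handleBatch shows how a receiver could process the error consistently:
// keep the data and retry later (or push back on the sender) when the
// memory limiter refused it, and apply its normal error policy otherwise.
func handleBatch(consume consumeFunc, batch string) error {
	err := consume(batch)
	if err == nil {
		return nil
	}
	if errors.Is(err, ErrDataRefused) {
		// Refused by the memory limiter: do not drop the data here.
		return fmt.Errorf("temporarily refused, retry later: %w", err)
	}
	// Any other error: log, drop, or retry per the receiver's own policy.
	return err
}

func main() {
	refusing := func(string) error { return ErrDataRefused }
	fmt.Println(handleBatch(refusing, "batch-1"))
}
```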

Additionally, it may be useful to implement a callback interface that can be invoked by the MemoryLimiter or any other component to apply backpressure, stop the pipeline from receiving data, or perform some other action. It could even be useful for an exporter receiving API throttling errors, as in #1121, to stop the pipeline.

Defining such an interface would allow controlling the pipeline directly rather than relying on error handling in every receiver.
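
As a rough illustration of that idea, the sketch below shows one possible shape for such a callback interface. PipelineController and loggingController are invented for this example and are not part of the collector.

```go
package main

import "fmt"

// PipelineController is a hypothetical interface that the memory limiter,
// or an exporter hitting API throttling (as in #1121), could call to pause
// or resume data intake instead of relying on per-receiver error handling.
type PipelineController interface {
	// Pause asks all receivers feeding the pipeline to stop accepting data.
	Pause(reason string)
	// Resume tells the receivers they may accept data again.
	Resume()
}

// loggingController is a trivial implementation used only to show the shape
// of the interface.
type loggingController struct{}

func (loggingController) Pause(reason string) { fmt.Println("pipeline paused:", reason) }
func (loggingController) Resume()             { fmt.Println("pipeline resumed") }

func main() {
	var ctrl PipelineController = loggingController{}
	ctrl.Pause("memory usage above soft limit") // e.g. called by the memory limiter
	ctrl.Resume()                               // called once memory usage drops
}
```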

@tigrannajaryan
Member Author

3. We may want to make the behavior configurable, i.e. drop vs backpressure. Depending on the use-case the desired behavior may be different.

@flands here is the issue I mentioned.

@flands
Contributor

flands commented Jun 18, 2020

Definitely need 3. I'm also still confused about why it is a processor :) You can only define it once per instance, right? It applies to all pipelines (or the least defined one wins for all pipelines). Then we also have #1078.

@flands added this to the GA 1.0 milestone Jun 18, 2020
@tigrannajaryan
Member Author

tigrannajaryan commented Jun 18, 2020

It has to be a processor. Today it is the only way you can affect the pipeline (drop the data or return errors to receivers).
We can turn the memory limiter into a special concept that does not fit our receiver/exporter/processor/extension component model, but I am not sure it is necessary.
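
To illustrate the point, here is a simplified Go sketch of a processor-shaped stage: data must flow through it, so it can either forward the data or refuse it. The types here are stand-ins invented for this example, not the collector's actual consumer interfaces.

```go
package main

import (
	"errors"
	"fmt"
)

// nextConsumer is a simplified stand-in for the next stage in a pipeline.
type nextConsumer func(batch string) error

// memoryLimiter wraps the next consumer, which is exactly what a processor
// does: data must flow through it, so it can refuse data under pressure.
type memoryLimiter struct {
	next      nextConsumer
	overLimit func() bool // e.g. compares runtime memory stats to a limit
}

func (m *memoryLimiter) consume(batch string) error {
	if m.overLimit() {
		// Refuse the data; the receiver upstream decides whether to retry,
		// drop, or push back on its sender.
		return errors.New("memory limit exceeded, data refused")
	}
	return m.next(batch)
}

func main() {
	limiter := &memoryLimiter{
		next:      func(batch string) error { fmt.Println("exported:", batch); return nil },
		overLimit: func() bool { return false }, // pretend we are under the limit
	}
	fmt.Println(limiter.consume("batch-1"))
}
```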

@owais
Contributor

owais commented Jun 18, 2020

@flands It might not conceptually be a "processor" as it does not process data but it is part of the data pipeline. Data has to flow through it in order for it to be able to drop it. This is a very simple design that fits quite well with the rest of the system. Data goes into the memory limiter "processor" and depending on memory pressure, it'll either come out on the other side or never be seen again.

@tigrannajaryan modified the milestones: GA 1.0, Backlog Oct 2, 2020
@andrewhsu added the enhancement (New feature or request) label and removed the feature request label Jan 6, 2021
@github-actions bot added the Stale label Jan 11, 2023
@github-actions bot closed this as not planned (stale) Feb 12, 2023
@tigrannajaryan
Member Author

This should not be closed. It is important to clarify how the Collector works in memory limited mode and what we expect the receiver implementations to do.

@tigrannajaryan
Member Author

cc @open-telemetry/collector-approvers

tigrannajaryan added a commit to tigrannajaryan/opentelemetry-collector that referenced this issue Mar 31, 2023
Contributes to open-telemetry#1084

- Clarify what the memory limiter does.
- Set expectations for receivers: how they are supposed to react
  when the memory limiter refuses the data.

All receivers must adhere to this contract. See for example
an issue opened against the filelog receiver:
open-telemetry/opentelemetry-collector-contrib#20511

Future work: one additional thing we can do is implement backoff
logic in the memory limiter. When in memory-limited mode, the processor
can introduce pauses before it returns from the ConsumeLogs/Traces/Metrics
call. This would slow down the inflow of data into the Collector
and give the pipeline time to clear and memory usage time to return to
normal. This needs to be explored further.
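
A small Go sketch of the backoff idea described above, assuming a hypothetical inLimitedMode check; the memory limiter does not currently do this.

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// consumeBatch sketches the proposed backoff: when the collector is in
// memory-limited mode, pause briefly before returning so that upstream
// senders are slowed down and the pipeline gets time to drain.
func consumeBatch(ctx context.Context, inLimitedMode func() bool, pause time.Duration) error {
	if inLimitedMode() {
		select {
		case <-time.After(pause):
			// Pause elapsed; refuse the data so the receiver retries later.
		case <-ctx.Done():
			return ctx.Err()
		}
		return fmt.Errorf("memory limited, retry later")
	}
	// Under normal conditions the data would be passed to the next consumer.
	return nil
}

func main() {
	err := consumeBatch(context.Background(), func() bool { return true }, 100*time.Millisecond)
	fmt.Println(err)
}
```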
tigrannajaryan added further commits to tigrannajaryan/opentelemetry-collector that referenced this issue Mar 31, 2023
Same commit message as above, additionally noting that there are no functional changes to the memory limiter and adding a test that demonstrates the memory limiter does not lose data if the receiver and exporter behave according to the contract.
pull bot pushed a commit to boost-entropy-golang/opentelemetry-collector that referenced this issue Apr 3, 2023
…etry#7459)
Same commit message as above.
@tigrannajaryan
Member Author

I am going to close this since I think the current documentation is now clear about how the memory limiter works.

Any additional changes (e.g. adding an option to choose between refusing and dropping) can be filed as a new issue.
