[8.12](backport #40231) update max_number_of_messages parameter description #40339

Closed

wants to merge 2 commits
4 changes: 2 additions & 2 deletions .buildkite/pipeline.py
@@ -143,8 +143,8 @@ def __init__(self):
         self.files: list[str] = []
 
     def get_pr_changeset(self) -> list[str]:
-        base_branch = os.getenv("BUILDKITE_PULL_REQUEST_BASE_BRANCH", "main")
-        diff_command = ["git", "diff", "--name-only", "{}...HEAD".format(base_branch)]
+        hash = ["git", "rev-parse", "8.12"]
+        diff_command = ["git", "diff", "--name-only", "{}...HEAD".format(hash)]
         result = subprocess.run(diff_command, stdout=subprocess.PIPE)
         if result.returncode == 0:
             self.files = result.stdout.decode().splitlines()
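For context, here is a standalone sketch of the changed-files lookup this hunk performs, assuming a git checkout where the `8.12` ref exists; the helper names `build_diff_command` and `changed_files` are illustrative and not from the PR:

```python
# Illustrative sketch, not the PR's exact code: list the files changed
# relative to a fixed base ref such as "8.12", as the backported
# pipeline's get_pr_changeset does.
import subprocess


def build_diff_command(base_ref: str = "8.12") -> list[str]:
    # "base...HEAD" asks git for files changed since the merge base
    # of base_ref and HEAD.
    return ["git", "diff", "--name-only", "{}...HEAD".format(base_ref)]


def changed_files(base_ref: str = "8.12") -> list[str]:
    # Returns an empty list if git exits nonzero (e.g. unknown ref).
    result = subprocess.run(build_diff_command(base_ref), stdout=subprocess.PIPE)
    if result.returncode == 0:
        return result.stdout.decode().splitlines()
    return []
```

Note that passing the ref string itself to `format` keeps the command well-formed, whereas formatting a list (as in the hunk's `hash` variable) would embed Python list syntax in the git revision range.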
8 changes: 7 additions & 1 deletion x-pack/filebeat/docs/inputs/input-aws-s3.asciidoc
@@ -261,7 +261,13 @@ The default is `10 MiB`.
 ==== `max_number_of_messages`
 
 The maximum number of SQS messages that can be inflight at any time. Defaults
-to 5.
+to 5. When processing large amount of large size S3 objects and each object has
+large amount of events, if this parameter sets too high, it can cause the input
Contributor:

> When processing large amount of large size S3 objects and each object has large amount of events, if this parameter sets too high, it can cause the input to process too many messages concurrently, overload the agent and cause ingest failure.

Perhaps something like:
Setting this parameter too high can overload Elastic Agent and cause ingest failures in situations where the SQS messages contain many S3 objects or the S3 objects themselves contain large numbers of messages.

Or minimally:

When processing a large number of large size S3 objects that each have a large number of events, if this parameter is set too high, it can cause the input to process too many messages concurrently, overloading the agent and causing ingest failure.

Contributor:

@strawgate Thanks for the review!! Ugh, sorry, this is a backport PR. Let me change it in the main branch in a separate PR and backport it into these branches.

+to process too many messages concurrently, overload the agent and cause ingest failure.
+We recommend to keep the default value 5 and use the
+{fleet-guide}/es-output-settings.html#es-output-settings-performance-tuning-settings[preset]
+option to tune your Elastic Agent performance. You can optimize for throughput,
Contributor:
We should specifically recommend either Balanced or Throughput for this use-case

+scale, latency, or you can choose a balanced (the default) set of performance specifications.
 
 [id="input-{type}-parsers"]
 [float]
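For reference, the parameter under discussion lives in the aws-s3 input configuration. A minimal sketch of where it is set, with a placeholder queue URL:

```yaml
filebeat.inputs:
  - type: aws-s3
    # Placeholder queue URL, for illustration only.
    queue_url: https://sqs.us-east-1.amazonaws.com/123456789012/example-queue
    # Documented default; the updated docs recommend keeping this at 5
    # and tuning throughput via the Elastic Agent performance presets.
    max_number_of_messages: 5
```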