Allow configuration of Agent(+)Beats Internal queue (on disk queue) #284
Comments
This issue was already described under Kibana issues.
@nimarezainia Would this fall under the "input settings" effort?
I am going to move this to the Elastic-Agent repository.
@joshdover @zez3 This can't currently be done efficiently enough for it to make sense. These internal queues are a per-Beat concept and are more closely aligned with writing to an output. Many of these parameters exist so users can tune the throughput they get from the Beat, and they sit close to output parameters like bulk_max_size. More thought is going into this with the new shipper work, and we will provide easier means for users to manage throughput from the Agent itself. IMO this would have to be considered part of the shipper and not an input parameter. (fyi @cmacknz)
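For context, here is roughly how the per-Beat memory queue and the output tuning parameters sit side by side in a standalone Beat configuration (a minimal sketch using the documented Filebeat settings; the values and the host are illustrative):

```yaml
# Standalone Filebeat: the internal queue is configured per Beat.
queue.mem:
  events: 4096            # max events buffered in the memory queue
  flush.min_events: 2048  # minimum batch size forwarded to the output
  flush.timeout: 1s       # forward whatever is buffered after this long

# Throughput is tuned hand in hand with the output parameters.
output.elasticsearch:
  hosts: ["https://elasticsearch:9200"]  # illustrative host
  worker: 2               # parallel workers sending bulk requests
  bulk_max_size: 1600     # events per bulk request
```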
Is this not what we need? Or am I missing something here? Also, what do you mean by efficiently? Are you referring to the 2 extra fsyncs, some specific filesystem like btrfs, or spool file locking? I am asking here for the option to be included in the Fleet policy, with the appropriate limitations and page pool warnings.
@joshdover
@zez3 We are working on providing output-level configurations (such as load balancing, the disk queue, and performance tuning). We don't have a timeline yet for when these will be available, but needless to say this is prioritized. You have already seen many of the tracking issues; they are listed below. Be aware that these track all the ongoing internal architectural work; it's unknown exactly in which release the UI work to expose the configuration parameters to users will be done. elastic/elastic-agent-shipper#7
As long as I have an API that I can call, I would be happy to test the new shipper's on-disk queue. The UI part can be implemented later. And hopefully we can finally solve the drops in my deployment. ;)
Yep, we're discussing having a configuration API as a first step, to enable testing use cases before we roll out the UI.
Is there a workaround that we can document in this issue? Like, is it possible to set
Not sure if that would work for a Fleet-managed Agent with its underlying Beats.
Yes, the agent policy only lets us control the contents of the beat
Oh, now I remember: about half a year ago I tried to abuse the Fleet policy API call and force a JSON payload containing the mem queue settings, but that failed. I think I discussed this at the time with @ruflin.
If 8.4 is going to be released soon, perhaps someone could allow a custom policy where we can define such options?
I found code in elastic-agent for an Perhaps we could add that back to the spec files for Beats now? Or did we want to be less prescriptive than the
I like this idea, but one concern is the same one I raised here: elastic/elastic-agent-shipper#28 (comment). We're going to be moving to a single shared queue for all integrations with the shipper, and if we expose these settings now, the way they get applied once there is a single queue will not be the same, making the shipper rollout a breaking change. IMO we should defer on this to provide a good upgrade experience to the shipper once it's GA.
We are planning to expose the queue parameters (although perhaps not the disk queue initially) as part of the agent output configuration; that work is tracked in elastic/beats#35615.
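For illustration, output-level queue settings in an agent policy could take roughly this shape (a sketch only: whether and when these keys are exposed is governed by the work tracked in elastic/beats#35615, the key names are assumed from the Beats-style queue settings, and the values are illustrative):

```yaml
# Sketch: memory queue parameters applied at the output level of an
# agent policy instead of per Beat (key placement assumed; not
# confirmed by this thread).
outputs:
  default:
    type: elasticsearch
    hosts: ["https://elasticsearch:9200"]
    queue.mem.events: 3200             # events the shared queue can hold
    queue.mem.flush.min_events: 1600   # minimum batch size per flush
    queue.mem.flush.timeout: 10s       # flush buffered events after 10s
```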
Queue configuration has been added, hence closing this as done.
Describe the enhancement:
Enable the option to configure the internal queue of my Agents and/or the underlying Beats from the Fleet policy.
At the moment this is only available for standalone Agents.
https://www.elastic.co/guide/en/beats/filebeat/current/configuring-internal-queue.html
Describe a specific use case for the enhancement or feature:
When a managed Agent dies or gets restarted, we lose messages that were not yet consumed by Elasticsearch.
We would like the option to increase the buffer memory, or, since our disks are NVMe and future ones will achieve even greater I/O speeds, to use the internal disk queue to make sure that we do not miss messages.
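For reference, the standalone-only disk queue configuration being requested here looks roughly like this (a minimal sketch of the documented Beats queue.disk settings; the sizes are illustrative):

```yaml
# Persist the event queue to disk so a crash or restart does not drop
# events that Elasticsearch has not acknowledged yet.
queue.disk:
  max_size: 10GB                  # upper bound on the on-disk queue size
  path: "${path.data}/diskqueue"  # directory for queue segment files
```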