Spool to disk error #12174
Comments
I'm still investigating this, but some preliminary details: the error is misleading, as the real condition isn't a lack of system memory but a lack of free pages in the disk spool. I've tried many different constraints on both current and old Beats versions, and e.g. a 4GB spool on a machine with 2GB of memory works fine. The problem observed here is that the page allocation during the spool transaction commit fails once the spool file has no free pages left. The default spool file size is only 100MB, so the (imperfect) configuration workaround would actually be to increase the spool file size.
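For illustration only, a sketch of that workaround, assuming the beta queue.spool schema from the 7.x reference; the key names and values here are examples, not taken from this thread, so verify them against the docs for your Beats version:

```yaml
# Example only: enlarge the on-disk spool so the allocator has more free pages.
queue.spool:
  file:
    path: "${path.data}/spool.dat"
    size: 512MiB      # default is 100MiB; a larger file leaves more free pages
    page_size: 16KiB  # page granularity of the spool file
  write:
    buffer_size: 10MiB
    flush.timeout: 5s
    flush.events: 1024
```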
I've noticed the same error with filebeat.
Having the below config:
Not sure what caused that ERROR, but after some time I can see data being ingested.
Same here for journalbeat version 7.6.1.
I notice that a cron'ed … The logged ERROR appears sporadically during the day, somehow resolving itself (?), then manifests at some point so that ingest comes to a complete halt.
EDIT: output is Logstash, load-balanced to two nodes, no further config.
Seeing this as well with filebeat 7.7.1 and a …
(times in the log are UTC, screenshot is PST)
Closing: disk spool is deprecated in favor of the disk queue |
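For anyone migrating, a minimal sketch of the replacement disk queue configuration, assuming Beats 7.10+ where queue.disk is available; check the reference docs for your version:

```yaml
# Example only: the disk queue that superseded the beta disk spool.
queue.disk:
  path: "${path.data}/diskqueue"  # directory for queue segment files
  max_size: 10GB                  # upper bound on disk space used by the queue
```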
I'm getting errors when I use the spool-to-disk feature: not enough memory to allocate 255 data page(s).
2019-05-10T18:28:10.029Z ERROR [publisher] spool/inbroker.go:544 Spool flush failed with: pq/writer-flush: txfile/tx-alloc-pages: file='/var/lib/statsdbeat/spool.dat' tx=0: transaction failed during commit: not enough memory to allocate 255 data page(s)
This is running on a server with 4 GB of memory. What configuration settings are there to reduce the number of data pages? (See the sketch below.)
Thanks
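To address the question above: as noted earlier in the thread, the allocation failure reflects free pages in the spool file rather than system memory, so the relevant knobs are the spool file size and the write buffer / flush settings that determine how many pages a single flush asks for. A hedged sketch, assuming the same queue.spool schema as above; the values are illustrative, not a recommendation from this thread:

```yaml
# Example only: smaller, more frequent flushes request fewer pages per transaction,
# while a larger spool file leaves more free pages to allocate from.
queue.spool:
  file:
    size: 1GiB          # more headroom than the 100MiB default
  write:
    buffer_size: 1MiB   # smaller write buffer -> fewer pages requested per flush
    flush.events: 512   # flush after fewer events
    flush.timeout: 1s
```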