[Reporting] Check Elasticsearch http.max_content_length against Kibana xpack.reporting.csv.maxSizeBytes #26383
Comments
Pinging @elastic/kibana-app
Taking a look now, whilst #26307 is landing
Initial pass has landed here: #26482. Regarding:
The answer here is that changing the value is reflected in the cluster API's response. However, different nodes can have differing settings for this value, so there isn't a bullet-proof way to guarantee that each node will react similarly. We also aren't checking whether a node gets restarted with a different value, which can break any safeguards we build in at start-up. Another alternative is doing a pre-flight check on save, to see whether the node that reporting persists to will be able to handle the body size. I'm not sure how load-balancing happens in ES when there are multiple nodes in a cluster, so even that check could hit a different node than the actual document persistence call does.
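The per-node divergence described above could at least be surfaced, even if it can't be prevented. A minimal sketch, assuming a response shaped like ES's GET /_nodes/settings API (the node names and values below are made-up illustrations, and real responses only include explicitly-set settings, so absent values mean "default"):

```python
def collect_max_content_lengths(nodes_settings_response):
    """Map node name -> its configured http.max_content_length (None if unset)."""
    result = {}
    for node_id, node in nodes_settings_response.get("nodes", {}).items():
        value = (
            node.get("settings", {})
            .get("http", {})
            .get("max_content_length")
        )
        result[node.get("name", node_id)] = value
    return result

# Hypothetical two-node cluster where one node was restarted with a
# different limit -- exactly the scenario that defeats a startup-only check.
sample = {
    "nodes": {
        "abc": {"name": "node-1", "settings": {"http": {"max_content_length": "100mb"}}},
        "def": {"name": "node-2", "settings": {"http": {"max_content_length": "50mb"}}},
    }
}

limits = collect_max_content_lengths(sample)
if len(set(limits.values())) > 1:
    print("warning: nodes disagree on http.max_content_length:", limits)
```

Even this only detects disagreement at the moment of the check; as noted above, a node restarted later with a different value would slip past it.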
It's not only this ES setting that can potentially break this functionality; a reverse-proxy in front of ES can too. Therefore it doesn't pay off to invest too heavily in safety checks for them. I think the best we can aim for is two things:
We can also document this as a problem scenario in the troubleshooting guide, and the documentation could explain how to look at the ES settings and logs (we'll have to check whether ES would actually log it) to see whether it's an ES problem or a problem with a reverse-proxy. [*] Preserving the error messages with that field feels wrong to me, but that's a different conversation.
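For the troubleshooting-guide angle, a user can narrow the cluster-settings response down to just the relevant value with ES's standard filter_path option. A small sketch that only builds the probe URL (the host is a placeholder; the endpoint and query parameters are the ones discussed in this issue):

```python
from urllib.parse import urlencode

def settings_probe_url(es_host):
    """URL that surfaces the effective http.max_content_length, including
    built-in defaults, narrowed with the filter_path response filter."""
    params = urlencode({
        "include_defaults": "true",
        "filter_path": "defaults.http.max_content_length",
    })
    return f"{es_host}/_cluster/settings?{params}"

print(settings_probe_url("http://localhost:9200"))
```

Fetching that URL (e.g. with curl) shows what ES believes its limit is; if exports still fail below that limit, a reverse-proxy in front of ES is the likelier culprit.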
#26482 closed!
If xpack.reporting.csv.maxSizeBytes (default 10mb) in kibana.yml is greater than http.max_content_length in Elasticsearch (default 100mb), that would be an invalid stack configuration. We can probably detect this automatically, and log a warning if needed, by checking the max_content_length in ES with:

http://localhost:9200/_cluster/settings?include_defaults=true

but first let's verify that changing the value in elasticsearch.yml is reflected in the API response. I'm not sure what happens if that value is set differently across multiple ES nodes.
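The proposed check boils down to comparing two byte-size settings. A minimal sketch, assuming a simplified unit parser (this is a stand-in for illustration, not Elasticsearch's actual ByteSizeValue parsing):

```python
# Byte-size suffixes, simplified; ES accepts more forms than this.
UNITS = {"b": 1, "kb": 1024, "mb": 1024**2, "gb": 1024**3}

def parse_byte_size(value):
    """Parse strings like '10mb' or '2048' into a number of bytes."""
    value = value.strip().lower()
    # Try longer suffixes first so '10mb' matches 'mb', not 'b'.
    for suffix, factor in sorted(UNITS.items(), key=lambda kv: -len(kv[0])):
        if value.endswith(suffix):
            return int(float(value[: -len(suffix)]) * factor)
    return int(value)  # bare number of bytes

# Defaults from the issue description.
max_size_bytes = parse_byte_size("10mb")       # xpack.reporting.csv.maxSizeBytes
max_content_length = parse_byte_size("100mb")  # http.max_content_length

if max_size_bytes > max_content_length:
    print("warning: xpack.reporting.csv.maxSizeBytes exceeds Elasticsearch "
          "http.max_content_length; CSV exports may fail to persist")
```

With the defaults above the comparison passes; the warning would fire only for a misconfigured stack, e.g. maxSizeBytes raised past the ES limit.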