Currently, Discover loads all fields that are not filtered out via the index pattern field filters. This can become problematic if some fields contain very large values that users are not (yet) aware of, so they may not have had a chance to filter them out via the index pattern field filters. There is also the case where only some documents contain very large field values.
If such fields match the search and are displayed (for example, a log message larger than a few megabytes), the browser starts to slow down, first while downloading the response and then while rendering it.
I propose limiting the displayed field value to some reasonable size that would not disrupt normal operation.
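Such a limit could be enforced client-side before rendering. A minimal sketch, assuming a character-based cutoff (the helper name `truncateFieldValue` and the 1024-character limit are illustrative, not part of the OpenSearch Dashboards codebase):

```typescript
// Illustrative client-side truncation: cap a field value before it reaches the
// doc table renderer. The limit and names here are assumptions for the sketch.
const MAX_DISPLAY_CHARS = 1024;

function truncateFieldValue(value: string, limit: number = MAX_DISPLAY_CHARS): string {
  if (value.length <= limit) {
    return value;
  }
  // Keep the first `limit` characters and note how much was cut off, so the
  // user knows there is more data available on demand.
  return `${value.slice(0, limit)}… [truncated ${value.length - limit} characters]`;
}

// Example: a 10 MB log message is reduced to roughly 1 KB before rendering.
const hugeLog = 'x'.repeat(10 * 1024 * 1024);
console.log(truncateFieldValue(hugeLog).length);
```

A "show full value" affordance could then fetch or reveal the untruncated field only when the user asks for it.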
A similar issue exists in Kibana; see the discussion in elastic/kibana#98497.
@tarmacmonsterg thanks for opening this issue. I'm finding it a little difficult to understand the issue that you are facing here. Can you describe your problem in a little more detail or steps to reproduce your problem?
Here's what happens on the Discover page today:
1. On page load, there is a request to retrieve the list of available index patterns.
2. A second request fetches all the fields available for the selected index pattern.
3. A third request fetches the aggregation response for the selected time range.
Of the three requests, the third is likely the largest, and the solution you have proposed would not address it, since it targets the second request.
@ashwin-pc The main problem is that a single entry can contain a very large log message (more than 10 MB). A search can then return many such messages. For example, if 10 messages of 10 MB each match the search criteria, the browser slows down severely while the field containing the message is visible. If that field is hidden, the data is not loaded and everything works smoothly. The only workaround I found was to hide the message field and then open the documents one by one to read it.
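The workaround of hiding the field maps to `_source` filtering in the search request: when the large field is excluded from `_source`, the cluster never ships it to the browser. A sketch of such a request body (`message` is a hypothetical field name):

```typescript
// Illustrative search body: exclude the large field from _source so the
// response stays small; individual documents can still be fetched in full
// on demand when the user expands a row.
const searchBody = {
  query: { match_all: {} },
  _source: { excludes: ['message'] },
  size: 10,
};

console.log(JSON.stringify(searchBody._source));
```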
@tarmacmonsterg sorry for taking a while to get back to this. I get your point. The query in Discover is actually a request sent directly to the OpenSearch cluster, so I think this issue needs to be moved to the OpenSearch repo, since a provision to truncate field values would have to be added to the search API.
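Until such a provision exists, one server-side approximation is a script field that returns only a prefix of the stored value. A sketch (the `message` field, the `message_preview` name, and the 1024-character limit are all assumptions; Painless access to `params._source` is comparatively slow, so this is an illustration rather than a recommendation):

```typescript
// Illustrative request body: replace the full field with a truncated script
// field so only a prefix of the large value leaves the cluster.
const truncatedSearchBody = {
  _source: false,
  script_fields: {
    message_preview: {
      script: {
        lang: 'painless',
        source:
          'params._source.message.substring(0, ' +
          '(int) Math.min(1024, params._source.message.length()))',
      },
    },
  },
};

console.log(Object.keys(truncatedSearchBody.script_fields));
```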