Better support for large query results #78

Open

arikfr opened this issue Feb 5, 2014 · 39 comments

@arikfr
Member

arikfr commented Feb 5, 2014

If a query has a large result set (~50K rows), it makes the UI get stuck. We need to detect large result sets and handle them differently (server-side pagination?).

@natict

natict commented Mar 10, 2014

No pagination + Indicator + CSV download
should be a good start :)

@arikfr
Member Author

arikfr commented Mar 10, 2014

yep, another optimization is to mark big data sets when we store the query result object.

@arikfr
Member Author

arikfr commented Jul 26, 2015

@ChrisLocus

Anybody heard anything about any work on this front? Running into this now.

@eshubhamgarg

@arikfr Do you have any updates on this one? Like, when can we expect a feature release?

@arikfr
Member Author

arikfr commented Jan 31, 2017

It's very low priority compared to other stuff, as usually you don't need large result sets in Redash. So far no work has been done on this one.

@adfel70

adfel70 commented Apr 26, 2017

Hi,
Can you explain what the bottleneck is?
Thanks

@bboe

bboe commented May 30, 2017

@arikfr how open would you be to a pull request in this area? I have little knowledge of Redash internals; however, we would like to solve the issue and may be able to throw some resources at it if we can work to get any changes incorporated into the project.

Do you have a ballpark estimate on the amount of effort it would require to detect a large result set, and offer a download?

@arikfr
Member Author

arikfr commented Jun 1, 2017

@bboe how open? very much :) this is low priority for me, but I definitely want to better handle this.

It's hard to give an estimate without looking into this in more detail & understanding what kind of solution you want to achieve. Shoot me an email and let's talk further (arik at redash io).

@jesse-osiecki

@arikfr @bboe was a pull request ever made regarding this?

@bboe

bboe commented Aug 23, 2017

Not from my end. Development time for value ended up not being worth it.

@antwan

antwan commented Sep 20, 2017

The value is being able to use Redash to export/browse large sets of data. Currently it is only suitable for statistics generation.

A quick workaround would be to add an option to truncate data on the backend (after 1,000 entries) so users can still hit the "Export" button without a laggy UI caused by parsing massive JSON.
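A minimal sketch of that kind of backend truncation, purely for illustration (the constant, function name, and result-dict layout are assumptions, not Redash's actual code):

```python
# Illustrative sketch: cap the rows sent to the UI and flag that truncation
# happened, so the frontend stays responsive and can show an indicator.
MAX_UI_ROWS = 1000

def truncate_for_ui(result):
    rows = result["rows"]
    if len(rows) <= MAX_UI_ROWS:
        return result, False
    return dict(result, rows=rows[:MAX_UI_ROWS]), True  # True => show a "truncated" notice
```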

@jezdez
Member

jezdez commented Aug 16, 2018

This has been merged! ✨

@jezdez jezdez closed this as completed Aug 16, 2018
@arikfr arikfr reopened this Aug 27, 2018
@arikfr
Member Author

arikfr commented Aug 27, 2018

@jezdez this issue is about large query results and not a long list of queries :)

@jezdez
Member

jezdez commented Aug 28, 2018

Ugh, being able to read would clearly be an advantage 😬

@changchichung

Is this issue solved? I have the same situation when the returned rows > 50K.

@arikfr
Member Author

arikfr commented Sep 7, 2018

@changchichung unfortunately not yet. Although if you don't have much more than 50K, maybe just giving more memory to Redash will resolve your issue.

@koooge
Contributor

koooge commented Oct 19, 2018

+1
Version 5.0.1+b4851 on an EC2 t2.small.
The Redash server cannot respond while it is processing.

@ismailsimsek

Version 5.0.1+b4851 on an EC2 m3.large (upgraded from t2.small).
Getting "redash Worker exited prematurely: signal 9 (SIGKILL)."
Our main requirement is the ability to download large datasets.

@arikfr
Member Author

arikfr commented Dec 15, 2018

@ismailsimsek try using a larger instance (depends on the dataset size you're trying to download).

@arikfr arikfr added the backlog label Dec 17, 2018
@arikfr arikfr mentioned this issue Jan 8, 2019
@ismailsimsek

ismailsimsek commented Jan 17, 2019

@arikfr What do you think about adding pagination to the query_runner, using a server-side cursor so the database handles the large result set? The client application could then process the result in batches.
This would probably require a message in the UI when the full result set is not passed to the UI.

Thanks for the great software btw.
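For reference, a minimal sketch of the server-side-cursor idea using psycopg2 named cursors; this is only an illustration of batched fetching, not how Redash's query_runner works today, and the connection string and query are placeholders:

```python
import psycopg2

# A named (server-side) cursor lets the database hold the result set while the
# client fetches it in batches instead of loading everything into memory at once.
conn = psycopg2.connect("dbname=example")  # placeholder connection string
with conn.cursor(name="large_result_cursor") as cursor:  # named => server-side cursor
    cursor.itersize = 1000  # rows fetched per round trip
    cursor.execute("SELECT * FROM big_table")  # placeholder query
    for row in cursor:  # psycopg2 pulls itersize rows at a time behind the scenes
        print(row)      # stand-in for whatever per-row/batch processing is needed
```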

@arikfr
Member Author

arikfr commented Jan 20, 2019

@ismailsimsek Pagination/server-side cursors won't help without changing how we store the data, because we can't stream the data into Postgres (where we currently store the results cache). It also won't help with serving the results to the browser, because those are served from the cache.

It will help once we change how we store the results, and it would significantly reduce the memory footprint of the workers.
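For context, a simplified model of the constraint described above; the schema and names here are illustrative, not Redash's actual tables:

```python
import json
import sqlite3

# The worker materialises the whole result in memory, serialises it once, and the
# API later serves the browser that single cached blob in one piece.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE query_results (id INTEGER PRIMARY KEY, data TEXT)")

def cache_result(rows):
    blob = json.dumps({"rows": rows})  # entire result serialised at once
    return db.execute("INSERT INTO query_results (data) VALUES (?)", (blob,)).lastrowid

def serve_result(result_id):
    (blob,) = db.execute("SELECT data FROM query_results WHERE id = ?", (result_id,)).fetchone()
    return blob  # the browser receives the whole blob, however large
```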

@harveyrendell

@arikfr I've been following this issue and I'm keen to contribute back if possible. We've had to deal with bad queries locking up our whole Redash service and would like a way to limit the maximum response sizes that are returned (either response size in memory or row count).

Could a minimal solution to this simply be adding a configuration option to set a maximum result size, and failing safely if it is exceeded? Some use cases mentioned above involve paging the query results into the database, and I'm interested to hear how those might be made available for download, e.g. as CSV.

@diwu1989

Can we set up a config for enforcing a LIMIT clause automatically?
Many SQL IDEs do this by default to provide a better user experience and prevent users from shooting themselves in the foot.

A default limit of 10k is a reasonable threshold. Nobody actually pages through 10k results line by line anyway, and the UI would stutter.

@arikfr
Member Author

arikfr commented Jul 1, 2020

Can we set up a config for enforcing a LIMIT clause automatically?

Yes... just need to find a way to do it in a "scalable" way for all the data sources (not all of them have to support it, though).
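One possible shape for such a hook, sketched purely as an assumption for illustration (the class and method names are not Redash's actual query_runner API): each data source runner declares whether it can apply a limit and, if so, how to wrap the query.

```python
# Hypothetical sketch of a per-data-source auto-limit hook.
class BaseQueryRunner:
    supports_auto_limit = False

    def apply_auto_limit(self, query, limit):
        return query  # runners that can't enforce a limit pass the query through unchanged

class PostgresRunner(BaseQueryRunner):
    supports_auto_limit = True

    def apply_auto_limit(self, query, limit):
        # Naive approach: wrap as a subquery so we don't have to parse the SQL.
        return f"SELECT * FROM ({query.rstrip().rstrip(';')}) AS limited_subquery LIMIT {limit}"

print(PostgresRunner().apply_auto_limit("SELECT * FROM events", 10000))
```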

@syang61-dev

Can we set up a config for enforcing a LIMIT clause automatically?

Yes... just need to find a way to do it in a "scalable" way for all the data sources (not all of them have to support it, though).

@arikfr

Can we make the result payload ‘paginated’ and have a default page size of 1000 rows?

Major ‘big data’ query engines seem to have such a knob; can we borrow the idea here? The API may look something like the following:

GET queries/{queryID}?page=N&pageSize=1000

The above API would make the backend execute the corresponding SQL statement on top of the cache.

Isn’t this a ‘scalable’ solution? If it is, I’d be happy to see how I could help (I am simply a user running into this situation now).
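A rough sketch of that endpoint shape, assuming a Flask view and a placeholder helper that loads the cached result (none of this is Redash's actual API); note it still parses the full cached blob on every request, which is the overhead discussed in the next comment:

```python
import json

from flask import Flask, jsonify, request

app = Flask(__name__)

def load_cached_result(query_id):
    # Placeholder: read and parse the whole cached JSON blob for this query.
    with open(f"/tmp/cache/{query_id}.json") as f:
        return json.load(f)

@app.route("/api/queries/<int:query_id>/results")
def paginated_results(query_id):
    page = int(request.args.get("page", 1))
    page_size = int(request.args.get("pageSize", 1000))
    rows = load_cached_result(query_id)["rows"]
    start = (page - 1) * page_size
    return jsonify(rows=rows[start:start + page_size], total=len(rows))
```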

@susodapop
Contributor

I agree pagination makes sense. But we store cached results as serialised JSON today. So even if we fetch 1,000 records at a time, each request would deserialise the entire result before plucking some rows and returning them.

This is fine for result sets <50k rows. But if a user runs a query with 1m rows, the serialisation overhead would balloon 🤔

@syang61-dev

syang61-dev commented Jul 23, 2021

I agree pagination makes sense. But we store cached results as serialised JSON today. So even if we fetch 1,000 records at a time, each request would deserialise the entire result before plucking some rows and returning them.

This is fine for result sets <50k rows. But if a user runs a query with 1m rows, the serialisation overhead would balloon 🤔

For results with 1m rows, maybe we could chunk the result (the BI result cache) and store the chunks. If the community is serious about working out a solution, please let me know and I'd like to see how I could help.
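A minimal sketch of the chunking idea, with an in-memory dict standing in for a real storage table keyed by (result_id, chunk_index); all names here are illustrative assumptions:

```python
import json

CHUNK_SIZE = 1000
chunk_store = {}  # stand-in for a table keyed by (result_id, chunk_index)

def store_chunked(result_id, rows):
    # Serialise the result in CHUNK_SIZE-row pieces instead of one big blob.
    for start in range(0, len(rows), CHUNK_SIZE):
        chunk_store[(result_id, start // CHUNK_SIZE)] = json.dumps(rows[start:start + CHUNK_SIZE])

def load_page(result_id, page):
    # A page request now deserialises only one small chunk, not the whole result.
    blob = chunk_store.get((result_id, page))
    return json.loads(blob) if blob else []
```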

@susodapop
Contributor

We're not going to work on this until at least after the V10 release later this summer. Later this year we'll introduce some processes for improving work planning with the OSS community as we don't want to see this work stagnate. I'll ping this issue once that channel is available.

@williswyf

Hi there,
is there any update on this issue? I still see this on v10.1. I just expect my users to be able to run a query and download the big result set, with no need to cache it into PostgreSQL.

@susodapop
Contributor

No update to share at this time. But we have not forgotten about this use case.

with no need to cache it into PostgreSQL.

The results are always cached, because running the query and downloading the results are distinct tasks. Postgres is where Redash saves the state (the query result) between these tasks. We can't skip the cache without a significant redesign.

@lscapim

lscapim commented Oct 17, 2023

Hi,
Is there any update on this issue?

@spapas

spapas commented Nov 23, 2023

Hello friends, this issue is like 10 years old. Will anybody give it some love, or will we wait until it gets a driver's license?

@guidopetri
Contributor

Since our community-led launch, we're all doing this as a side project, so priorities may be different. I'd happily accept a PR though.

@wtfiwtz

wtfiwtz commented Dec 11, 2023

I think replacing the JSON encoder and using Flask streaming are good options.
https://github.com/getredash/redash/blob/master/redash/utils/__init__.py#L107-L121
Flask stream_with_context - https://stackoverflow.com/questions/71991359/how-to-make-flask-stream-output-line-by-line
Fixing the front-end pagination to convert to back-end pagination is a much larger piece of work.
I've been planning to test this theory out but something else got a higher priority. I'll see how I go later this week or next.

See also #6218
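A minimal sketch of the streaming approach, assuming the rows are available as an iterable (e.g. a cursor) and that the stock json module is acceptable; this is not the actual Redash handler:

```python
import json

from flask import Response, stream_with_context

def stream_rows(rows):
    # Emit a JSON array one row at a time instead of building the whole
    # document in memory first; `rows` can be any iterable of dicts.
    def generate():
        yield '{"rows": ['
        for index, row in enumerate(rows):
            prefix = "," if index else ""
            yield prefix + json.dumps(row, default=str)  # default=str handles dates etc.
        yield "]}"

    return Response(stream_with_context(generate()), mimetype="application/json")
```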

@wtfiwtz

wtfiwtz commented Dec 12, 2023

Here's a first attempt: orchestrated-io@7540768

You'll have to excuse my React skills; I haven't got that part correct yet. This just downloads 1000 rows at first and then attempts to refresh the query result table (visualization) with the additional data when it arrives later.

I attempted to use send_file, stream_with_context, and the Python JSON encoder (ujson didn't work as you need custom encoding for fields such as dates), but they had a negligible impact on a 120k-row Google Sheets data source (cached, ~10 MB).

Removing the whitespace from JSON did help a bit, as it reduces the file size by about a sixth.
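(For reference, a minimal example of the compact-serialisation change, assuming the stock json module rather than Redash's custom encoder:)

```python
import json

row = {"id": 1, "name": "example", "created_at": "2023-12-12"}

# The default output puts a space after ':' and ','; compact separators drop it.
default = json.dumps(row)
compact = json.dumps(row, separators=(",", ":"))
print(len(default), len(compact))  # the saving adds up over hundreds of thousands of rows
```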

I'll have another go at the React component later.

Cheers,
Nigel

@wtfiwtz

wtfiwtz commented Jan 9, 2024

Here's a better version... it's a bit hacky but it does the job for now.
orchestrated-io#2

@wtfiwtz

wtfiwtz commented Mar 20, 2024

If you do apply this, I also recommend upgrading gunicorn, as you might see TCP disconnects when gunicorn goes to shut down its workers after X requests have been processed. See #5641 (comment)

@zachliu
Contributor

zachliu commented Jun 14, 2024

For query engines such as AWS Athena, I wish there were a way to:

  • Show only partial results if the data set is too large
  • Pass through the S3 URL as the download link (see the sketch below)
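For the second idea, a rough sketch with boto3 of handing the browser a time-limited link to the result object Athena already writes to S3 (the bucket and key below are placeholders):

```python
import boto3

# Generate a presigned, time-limited download link for the existing Athena
# result object instead of proxying the data through Redash.
s3 = boto3.client("s3")
download_url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-athena-results", "Key": "query-results/abc123.csv"},
    ExpiresIn=3600,  # link valid for one hour
)
print(download_url)
```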
