Should the pickle speed up be supported after Parquet integration? #1027
I am currently working on adding Parquet support to openml-python. Parquet files load a lot faster than ARFF files, so I was wondering whether we should still aim to provide the speed improvement of keeping the pickle files. I did a small speed test recording how long it takes to load the CC-18 suite from disk; the suite has 72 datasets varying from a few kB to 190 MB in size (parquet/pickle size).

Loading all datasets in the suite takes ~6 seconds for Parquet, ~700 ms for pickle, and ~7 minutes for ARFF.

Relatively speaking, the difference between pickle and Parquet is still big, but in absolute numbers Parquet loads the entire suite of 72 datasets in ~6 seconds. I think it is worth evaluating whether or not we want to keep the pickle files. The obvious drawback is slower loads, though the difference might not be noticeable in most cases. Getting rid of the pickle files would have several benefits as well.

@mfeurer
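For reference, a timing comparison like the one described above can be reproduced with a short script along these lines. This is a minimal sketch, not the actual benchmark: it assumes the datasets have already been downloaded to a local directory in all three formats, and the `datasets/` layout and file names are hypothetical rather than openml-python's real cache structure (`pd.read_parquet` additionally requires `pyarrow` or `fastparquet`).

```python
import time
from pathlib import Path

import pandas as pd
from scipy.io import arff  # liac-arff would work too; scipy's reader
                           # handles only numeric/nominal attributes

# Hypothetical directory holding each dataset in all three formats.
DATA_DIR = Path("datasets")


def time_load(load_fn, path):
    """Return the wall-clock seconds `load_fn` needs to load `path`."""
    start = time.perf_counter()
    load_fn(path)
    return time.perf_counter() - start


def load_arff(path):
    data, _meta = arff.loadarff(path)
    return pd.DataFrame(data)


totals = {"parquet": 0.0, "pickle": 0.0, "arff": 0.0}
for parquet_file in sorted(DATA_DIR.glob("*.parquet")):
    stem = parquet_file.stem
    totals["parquet"] += time_load(pd.read_parquet, parquet_file)
    totals["pickle"] += time_load(pd.read_pickle, DATA_DIR / f"{stem}.pkl")
    totals["arff"] += time_load(load_arff, DATA_DIR / f"{stem}.arff")

for fmt, seconds in totals.items():
    print(f"{fmt:>7}: {seconds:.2f}s total")
```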
Comments

Hi, thanks for bringing this up. Off the top of my head, I think we don't want to have pickle files for datasets that are also available as Parquet (I'm not sure whether we'll drop the ARFF files immediately). However, as these are 72 vastly different datasets, I was wondering whether there are any assumptions in Parquet about the data structure that benefit certain datasets (at least for the format added by @sahithyaravi1493 there was quite some overhead for wide datasets). So I think before making a decision it would be good to look at the per-dataset differences. What do you think about that?
I do think a full switch over to Parquet is reasonable in the near future (as long as the server has the Parquet files), though I don't want to do that in a single release cycle either. A bit more profiling based on dataset characteristics seems reasonable.
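To get the per-dataset view discussed here, one could record each dataset's shape next to its load times and then look at how the parquet/pickle gap scales with width. A sketch under the same hypothetical file-layout assumptions as above:

```python
import time
from pathlib import Path

import pandas as pd

DATA_DIR = Path("datasets")  # hypothetical local copies of the datasets

records = []
for parquet_file in sorted(DATA_DIR.glob("*.parquet")):
    pickle_file = parquet_file.with_suffix(".pkl")

    start = time.perf_counter()
    df = pd.read_parquet(parquet_file)
    parquet_s = time.perf_counter() - start

    start = time.perf_counter()
    pd.read_pickle(pickle_file)
    pickle_s = time.perf_counter() - start

    records.append({
        "dataset": parquet_file.stem,
        "n_rows": len(df),
        "n_cols": df.shape[1],
        "parquet_s": parquet_s,
        "pickle_s": pickle_s,
        "slowdown": parquet_s / pickle_s,
    })

stats = pd.DataFrame(records)
# Wide datasets are the suspected worst case, so sort by column count.
print(stats.sort_values("n_cols", ascending=False).head(10))
```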
Thanks, that makes sense. Will you extend your notebook to include such stats?
Another question: will we then drop liac-arff as a dependency in the near future? This would also allow us to stop semi-maintaining it, and we should let the scikit-learn folks know about this.
Yes, but I'll do that after the next release (first we'll have side-by-side support and keep pickling).
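For illustration, the side-by-side support mentioned here could boil down to a fallback chain like the sketch below; `load_dataset` and the cache file names are hypothetical and not openml-python's actual API. Preferring pickle keeps the current speed-up while the Parquet files roll out, and dropping the pickle branch later stays a small change.

```python
from pathlib import Path

import pandas as pd


def load_dataset(cache_dir: Path) -> pd.DataFrame:
    """Hypothetical loader: prefer the fastest cached format.

    Order: pickle (fastest) -> parquet -> ARFF (slow last resort).
    """
    pickle_path = cache_dir / "data.pkl"
    parquet_path = cache_dir / "data.parquet"
    arff_path = cache_dir / "data.arff"

    if pickle_path.exists():
        return pd.read_pickle(pickle_path)
    if parquet_path.exists():
        return pd.read_parquet(parquet_path)

    import arff  # the liac-arff package

    with arff_path.open() as fh:
        decoded = arff.load(fh)
    columns = [name for name, _type in decoded["attributes"]]
    return pd.DataFrame(decoded["data"], columns=columns)
```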
As soon as all data is available in Parquet format, I would be in favor of a major code cleanup removing all the ARFF logic (and thus also no longer requiring liac-arff).
I fully agree with that and am looking forward to it!