[Data] Prefetch data for PandasJSONDatasource
#54667
Conversation
Signed-off-by: Balaji Veeramani <bveeramani@berkeley.edu>
Summary of Changes
Hello @bveeramani, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request focuses on enhancing the performance of `PandasJSONDatasource` by addressing inefficiencies in its file I/O. It resolves an issue where repeated small read requests, especially when interacting with PyArrow's random-access files, led to poor performance. The core solution involves integrating a custom buffered reader to prefetch data, thereby minimizing I/O overhead and accelerating the data loading process.
Highlights
- **Performance Optimization:** Introduced a buffering mechanism within `PandasJSONDatasource` to significantly reduce the number of small, costly I/O requests when reading JSON files, particularly from cloud storage, by prefetching data.
- **New `StrictBufferedReader` Class:** A custom `StrictBufferedReader` class has been implemented. This wrapper ensures that underlying file objects are read with full buffering and prevents premature closure by external libraries like pandas, which is crucial for the datasource's double-read pattern (see the sketch after this list).
- **Configurable Buffer Size:** A `_BUFFER_SIZE` constant (set to 128 KiB) has been added to `PandasJSONDatasource`, allowing for a configurable buffer size to optimize data prefetching based on typical read patterns.
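For context, here is a minimal sketch of the idea behind `StrictBufferedReader`. It is a hypothetical reconstruction under the assumptions stated in the highlights (128 KiB buffer, no-op close), not the exact class merged in this PR:

```python
import io


class StrictBufferedReaderSketch(io.RawIOBase):
    """Hypothetical reconstruction of the wrapper's behavior.

    Forces every read through a large prefetch buffer and ignores close(),
    so pandas can't close the underlying file between the two read passes.
    """

    def __init__(self, raw, buffer_size: int = 128 * 1024):
        # io.BufferedReader fetches up to `buffer_size` bytes per underlying
        # request instead of issuing many tiny reads.
        self._buffered = io.BufferedReader(raw, buffer_size=buffer_size)

    def readable(self) -> bool:
        return True

    def readinto(self, b) -> int:
        # Route through read(), which loops until the request is satisfied,
        # rather than read1(), which performs at most one underlying read.
        data = self._buffered.read(len(b))
        b[: len(data)] = data
        return len(data)

    def close(self) -> None:
        # Deliberately a no-op: the datasource, not pandas, owns the file.
        pass
```

With a wrapper like this, both `pd.read_json` passes read through the large buffer, and the no-op `close()` keeps the underlying file open for the second pass.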
Code Review
This pull request introduces a `StrictBufferedReader` to improve performance for `PandasJSONDatasource` by enabling prefetching, which is a solid approach to mitigating the issue of numerous small reads from cloud storage. The implementation is sound. I have a couple of suggestions to enhance robustness and code clarity.
```diff
-        with pd.read_json(f, chunksize=1, lines=True) as reader:
+        stream = StrictBufferedReader(f, buffer_size=self._BUFFER_SIZE)
+        with pd.read_json(stream, chunksize=1, lines=True) as reader:
             df = _cast_range_index_to_string(next(reader))
```
This method doesn't handle the case where the input file is empty. If `f` is an empty file, `pd.read_json` will return an empty iterator, and `next(reader)` will raise a `StopIteration` exception, causing the read to fail.
You can make this more robust by providing a default value to `next()`. This will handle empty files gracefully by treating them as an empty DataFrame.
```diff
-                df = _cast_range_index_to_string(next(reader))
+                df = _cast_range_index_to_string(next(reader, pd.DataFrame()))
```
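To see the failure mode and the fix end to end, here is a small self-contained example (hedged: `io.StringIO` stands in for the real file handle):

```python
import io

import pandas as pd

# An empty JSON Lines input yields an empty chunk iterator.
empty_file = io.StringIO("")
with pd.read_json(empty_file, chunksize=1, lines=True) as reader:
    # next(reader) alone would raise StopIteration here; the default
    # turns an empty file into an empty DataFrame instead.
    df = next(reader, pd.DataFrame())

print(df.empty)  # True
```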
Signed-off-by: Balaji Veeramani <bveeramani@berkeley.edu>
How did you notice this?
```diff
         return df


 class StrictBufferedReader(io.RawIOBase):
```
curious, why call it Strict?
I observed that the built-in `io.BufferedReader` implementation doesn't do a good job of actually buffering the data. I think it's because pandas calls `BufferedReader.read1`, and `read1` doesn't prefill the 1 MiB buffer.
The name "Strict" denotes that it always performs the buffering.
```
       closing the buffer.
    2. pandas wraps the file in a TextIOWrapper to decode bytes into text. TextIOWrapper
       prefers calling read1(), which doesn't prefetch for random-access files
```
Is `read1` supposed to be `read()`?
No, `TextIOWrapper` calls `read1`, I think.
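That claim is easy to check with a small probe (a hedged sketch; `LoggingBuffer` is a hypothetical name):

```python
import io


class LoggingBuffer(io.BufferedReader):
    """Hypothetical buffer that logs which read method TextIOWrapper uses."""

    def read1(self, size=-1):
        print(f"read1({size})")
        return super().read1(size)

    def read(self, size=-1):
        print(f"read({size})")
        return super().read(size)


raw = io.BytesIO(b'{"a": 1}\n' * 100)
text = io.TextIOWrapper(LoggingBuffer(raw))
text.readline()  # prints read1(...), showing TextIOWrapper prefers read1
```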
Signed-off-by: Balaji Veeramani <bveeramani@berkeley.edu>
@iamjustinhsu I ran a batch inference workload and noticed that the read tasks were unacceptably slow.
## Description

### Status Quo

PR #54667 addressed OOM issues by sampling a few lines of the file. However, this code always assumes the input file is seekable (i.e., not compressed), which breaks zipped files, as reported in #55356.

### Potential Workaround

- Refactor the code reused between JsonDatasource and FileDatasource.
- Default to 10,000 rows per batch if a zipped file is found (see the sketch after this list).

## Related issues

#55356

---------

Signed-off-by: iamjustinhsu <jhsu@anyscale.com>
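A hedged sketch of that "default when not seekable" fallback; `estimate_rows_from_sample` is a hypothetical helper, not an actual Ray function:

```python
DEFAULT_ROWS_PER_BATCH = 10_000  # fallback mentioned in the workaround above


def rows_per_batch(f, estimate_rows_from_sample) -> int:
    """Sample only when the stream can be rewound for the second pass."""
    if not f.seekable():  # e.g., gzip-compressed streams can't seek(0)
        return DEFAULT_ROWS_PER_BATCH
    estimate = estimate_rows_from_sample(f)  # hypothetical sampling helper
    f.seek(0)  # rewind so the real read starts from the beginning
    return estimate
```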
Why are these changes needed?
`PandasJSONDatasource` reads a file twice: once to sample a row and infer the number of rows to read per batch, and again to actually load the data. To reset the file after sampling, the datasource opens the file as a random-access file. The issue is that PyArrow's random-access file doesn't prefetch enough data, which leads to many costly small requests and poor performance.
To mitigate this issue, this PR wraps the file in `io.BufferedReader` and prefetches more data.
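To make the double-read pattern concrete, here is a hedged sketch; the local filesystem, file name, and buffering calls are illustrative rather than the datasource's exact code:

```python
import io

import pyarrow.fs as pafs

filesystem = pafs.LocalFileSystem()
f = filesystem.open_input_file("data.jsonl")  # seekable random-access file

# Pass 1: sample a row to estimate how many rows fit in a batch. Wrapping
# the file in io.BufferedReader makes each underlying request fetch a
# large block instead of a few bytes at a time.
sample_stream = io.BufferedReader(f, buffer_size=128 * 1024)
sample_line = sample_stream.readline()

# Pass 2: rewind the random-access file and read the data for real,
# again through a fresh prefetching buffer.
f.seek(0)
data_stream = io.BufferedReader(f, buffer_size=128 * 1024)
payload = data_stream.read()
```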
Checks
- [ ] I've signed off every commit (by using the -s flag, i.e., `git commit -s`) in this PR.
- [ ] I've run `scripts/format.sh` to lint the changes in this PR.
- [ ] I've added any new APIs to the API Reference. For example, if I added a method in Tune, I've added it in `doc/source/tune/api/` under the corresponding `.rst` file.