[Train][Doc] Update PyTorch Data Ingestion User Guide #45421
Conversation
Nice! Love the added benefits & cleaner steps!
```diff
 .. tab-set::

-    .. tab-item:: PyTorch Dataset and DataLoader
+    .. tab-item:: PyTorch
```
The original names were more explicit to make it clear that this is referring to the dataset framework, rather than the training framework.
```diff
@@ -276,34 +275,66 @@ At a high level, you can compare these concepts as follows:
      - n/a
      - :meth:`ray.data.Dataset.iter_torch_batches`

+Why using Ray Data?
```
Suggested change:

```diff
-Why using Ray Data?
+Comparison with Ray Data
```
```diff
-**Option 1 (with Ray Data):** Convert your PyTorch Dataset to a Ray Dataset and pass it into the Trainer via ``datasets`` argument.
-Inside your ``train_loop_per_worker``, you can access the dataset via :meth:`ray.train.get_dataset_shard`.
-You can convert this to replace the PyTorch DataLoader via :meth:`ray.data.DataIterator.iter_torch_batches`.
+1. Convert your PyTorch Dataset to a Ray Dataset and
```
nit:

```diff
-1. Convert your PyTorch Dataset to a Ray Dataset and
+1. Convert your PyTorch Dataset to a Ray Dataset.
```
There are some other small typos/formatting errors that I'll review more thoroughly in a follow-up review.
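For reference, a minimal sketch of the Option 1 flow this hunk describes, assuming the API names quoted above (`ray.train.get_dataset_shard`, `iter_torch_batches`); the dummy dataset and the empty training step are placeholders, not from the guide:

```python
import ray
from ray import train
from ray.train import ScalingConfig
from ray.train.torch import TorchTrainer


def train_loop_per_worker():
    # Access this worker's shard of the dataset passed to the Trainer.
    shard = train.get_dataset_shard("train")
    # iter_torch_batches replaces the PyTorch DataLoader.
    for batch in shard.iter_torch_batches(batch_size=32):
        pass  # placeholder: forward/backward pass goes here


# Placeholder in-memory dataset; a real pipeline would read from storage.
train_ds = ray.data.from_items([{"x": i, "y": 2 * i} for i in range(1000)])

trainer = TorchTrainer(
    train_loop_per_worker,
    datasets={"train": train_ds},  # passed via the ``datasets`` argument
    scaling_config=ScalingConfig(num_workers=2),
)
trainer.fit()
```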
```diff
 For instructions, see :ref:`Ray Data for Hugging Face <loading_datasets_from_ml_libraries>`.
+**Option 2 (with HuggingFace Dataset):**
```
nit: I understand why you chose to do this, but I'm also a little worried this might be confusing, since Option 1 technically also uses Hugging Face Datasets.
Oh, I realized the difference now. Previously, this section aimed to teach users how to convert their HF Dataset to a Ray Dataset, then do training. But this PR categorizes directly by what we eventually use in the training function.

```
# prev
HF Dataset -> Ray Data -> HF Transformers
HF Dataset -> HF Transformers

# now
Ray Data -> HF Transformers
HF Dataset -> HF Transformers
```

My consideration here is that we'd better not force everyone to take the "HF Dataset -> Ray Data" conversion step. For example, their original dataset format could be Parquet, and before onboarding Ray they may already build an HF Dataset from the Parquet files and feed it to the HF Trainer. In this case, they can build the Ray Dataset either from Parquet or from the HF Dataset.

```
# Before onboarding Ray
raw data -> HF dataset -> HF transformer

# After onboarding Ray
option 1: raw data -> HF dataset -> Ray Data -> HF transformer
option 2: raw data -> Ray Data -> HF transformer
```

We can discuss more in person next week.
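A rough sketch of the two options, assuming the raw data is a Parquet file (the path is a placeholder):

```python
import datasets  # Hugging Face datasets
import ray

# Option 1: raw data -> HF dataset -> Ray Data -> HF transformer
hf_ds = datasets.load_dataset("parquet", data_files="train.parquet")["train"]
ray_ds = ray.data.from_huggingface(hf_ds)

# Option 2: raw data -> Ray Data -> HF transformer
ray_ds = ray.data.read_parquet("train.parquet")
```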
```diff
+**Streaming execution**:
+
+- The preprocessing pipeline will be executed lazily and stream the data batches into training workers.
+- Training can start immediately without significant up-front preprocessing time.
+
+**Automatic data sharding**:
+
+- The dataset will be automatically sharded across all training workers.
+
 For more details, see the following sections for each framework.
+
+**Leverage additional resources for preprocessing**
+
+- Ray Data can utilize all resources in the Ray cluster for preprocessing, not just those on your training nodes.
```
This is good content that I think everyone should read, regardless of whether or not they are starting with PyTorch data. Do you think we could bring this higher up in the guide (e.g. even in the introduction), and then reference it from here?
OK. Sounds good to me.
High-level comments:

- Can we copy some content from this blog post? This user guide should be the place to compare Ray Data against other data ingest solutions. Particularly, I'm thinking of copying over the diagrams as well as the table comparing against torch dataloader, HF dataset, tf data, etc.
- Proposed restructure of this guide:
  - (Ray Data + Ray Train) Quickstart
    - Code examples, with the "Option 1: Ray Data" content moved over here under each framework.
  - Why use Ray Data?
    - Comparison with Other Data Ingest Solutions
    - Comparison table
  - Alternative to Ray Data Ingest (Framework-native Dataloaders)
    - "Ray Data is the recommended data loading solution for scalable blah blah blah, but Ray Train still integrates well with existing dataloading solutions you may be using, such as X, Y, Z."
    - Link to the framework user guides, since we already go over how to set up the framework-native dataloaders.
  - Ray Data Configurations
    - All the remaining sections become subsections.

  I think this structure fixes the problem where I was getting lost in the middle of the user guide because it suddenly starts talking about the PyTorch dataloader -- it wasn't clear that there are two separate paths: Ray Data vs. alternatives. Now, we first put Ray Data front and center and make the case for it. Then, we talk about alternatives that are still integrated nicely.

- For a follow-up PR, it would be nice to have some more realistic examples. For example, show `read_parquet("s3://...")` instead of the `from_items` dummy dataset that we have right now in the torch Ray Data quickstart. Can borrow this from the blog post again (see the sketch below).
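A rough sketch of that contrast (the S3 bucket path is hypothetical):

```python
import ray

# Current quickstart style: an in-memory dummy dataset.
ds = ray.data.from_items([{"x": i} for i in range(100)])

# More realistic: read training data directly from cloud storage.
ds = ray.data.read_parquet("s3://my-bucket/train/")
```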
Discussed with Angelina, and below is a proposal for the user guide:

Main considerations:

Any thoughts? @matthewdeng @justinvyu @hongpeng-guo
I think the new proposal is very clear! Just one question: should we put "Framework-native utilities" first, or "Ray Data" first on this doc page?
My preference would be to put Ray Data before the native ones. Though I do think it would be good either way to add a concise tip at the start of each section to point the user to the other section.
@matthewdeng @hongpeng-guo ok, let's put "Ray Data" first, before the native ones, and cross-link each other at the beginning of their sections. I'll do it in the following PR.
Let's merge this one first!
Why are these changes needed?
This PR improves the framework migration steps to Ray Data in the data ingest user guide: it restructures the "Starting from PyTorch Data" section for better readability.

Follow-up PRs should do a larger restructure to make this guide more readable; the following PR will incorporate the restructure outlined under "Future restructure plans" in the discussion above.
Related issue number
Checks

- I've signed off every commit (`git commit -s`) in this PR.
- I've run `scripts/format.sh` to lint the changes in this PR.
- If I added a method in Tune, I've added it in `doc/source/tune/api/` under the corresponding `.rst` file.