Rewrite dataloader, improves responsiveness and reliability #13447
Conversation
Thanks, that's a massive improvement, especially for datasets that get iterated through in a small number of iterations (small datasets or big batch sizes). The killing and respawning of workers was one reason why I was increasingly questioning the fact that Gluon privileges the concept of epochs over the number of iterations for measuring progress through training (checkpointing, iterating, etc.). The next improvement I would love to see for the data loader is addressing the cold-start issue at the beginning of each epoch:
@mxnet-label-bot add [Data-loading, pr-awaiting-review]
self._prefetch = max(0, int(prefetch) if prefetch is not None else 2 * self._num_workers)
if self._num_workers > 0:
    def worker_initializer(data):
        global _worker_dataset
Won't this break when using multiple DataLoader instances with different datasets?
No, there is one global dataset per worker process; different data loaders have independent process pools.
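For readers unfamiliar with the pattern: here is a minimal, hypothetical sketch of how a per-process global keeps datasets from clashing across loaders (this is the general `multiprocessing.Pool` initializer idiom, not the exact MXNet code):

```python
import multiprocessing

_worker_dataset = None  # one copy of this global lives in each worker process

def _worker_initializer(dataset):
    # runs once in every worker process when its pool starts
    global _worker_dataset
    _worker_dataset = dataset

def _worker_fn(samples):
    # index into the per-process global; the dataset itself is never
    # re-pickled per batch, only the index list and the result are
    return [_worker_dataset[i] for i in samples]

if __name__ == "__main__":
    # two loaders -> two independent pools, each with its own global
    pool_a = multiprocessing.Pool(2, _worker_initializer, (list(range(100)),))
    pool_b = multiprocessing.Pool(2, _worker_initializer, (list("abcdef"),))
    print(pool_a.apply(_worker_fn, ([0, 1],)))  # -> [0, 1]
    print(pool_b.apply(_worker_fn, ([0, 1],)))  # -> ['a', 'b']
```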
@ThomasDelteil regarding point 1, this is currently available in GluonNLP with PrefetchingStream. The usage pattern may be made more user-friendly, but the basic idea may also be applicable to the Dataset API?
@leezu, the problem with the stream is that the shuffling, if it happens, has to be done ahead of time so that sequential access remains random. Currently, one solution that works and that I prefer, with the current Dataset and DataLoader API, is to use a continuous batch sampler and effectively iterate through your batches rather than using the concept of an epoch. You can design your continuous batch sampler to return an iterator that is effectively creating batches indefinitely.
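A hypothetical sketch of that continuous-batch-sampler idea (the class name and usage are illustrative, not a Gluon API):

```python
import random

class ContinuousBatchSampler:
    """Yields batches of indices forever, reshuffling after each full pass."""

    def __init__(self, dataset_len, batch_size):
        self._len = dataset_len
        self._batch_size = batch_size

    def __iter__(self):
        while True:  # never raises StopIteration; stop on iteration count instead
            indices = list(range(self._len))
            random.shuffle(indices)  # shuffle ahead of each pass over the data
            for start in range(0, self._len, self._batch_size):
                yield indices[start:start + self._batch_size]

# Usage sketch: pass as batch_sampler and break after N iterations.
# for i, batch in enumerate(DataLoader(dataset,
#                           batch_sampler=ContinuousBatchSampler(len(dataset), 32))):
#     if i == num_iterations:
#         break
```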
Looks good! Nice improvement
python/mxnet/recordio.py
Outdated
@@ -182,6 +188,10 @@ def read(self):
        Buffer read.
        """
        assert not self.writable
+       if not self.pid == current_process().pid:
+           # in forked process, obtain a new handle
+           # print("PID not matching, reset")
remove unused code
@ThomasDelteil I'm not sure I can follow the argument. You can view the current DataLoader as exposing a stream of batches; the sampling (shuffling) still happens lazily. In that case the data pipeline is composed of both a Dataset (with random access for the lazy shuffling) and the notion of a Stream over batches that can be prefetched.
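To make the composition concrete, here is a rough illustration of the idea (a hypothetical sketch, not GluonNLP's actual PrefetchingStream): shuffling stays lazy in the Dataset/sampler layer, while a thin wrapper prefetches already-formed batches in the background.

```python
import queue
import threading

class PrefetchingIter:
    """Wraps any batch iterator and prefetches up to num_prefetch batches."""

    def __init__(self, batch_iter, num_prefetch=2):
        self._queue = queue.Queue(maxsize=num_prefetch)
        self._sentinel = object()  # marks the end of the underlying iterator

        def _produce():
            for batch in batch_iter:
                self._queue.put(batch)  # blocks once num_prefetch are waiting
            self._queue.put(self._sentinel)

        threading.Thread(target=_produce, daemon=True).start()

    def __iter__(self):
        while True:
            batch = self._queue.get()
            if batch is self._sentinel:
                return
            yield batch

# Usage sketch:
# for batch in PrefetchingIter(iter(dataloader)):
#     ...
```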
@@ -299,7 +278,7 @@ def shutdown(self):
        self._shutdown = True


-class DataLoader(object):
+class DataLoaderV1(object):
It seems the new DataLoader preserves the API, so why not remove DataLoaderV1?
If some specific implementations were relying on the old queue-based methods, I think leaving the older version in is still preferable for some users. What do you think?
I don't have a strong opinion about it. If you deem the reliance on the queue to be reliance on an undocumented implementation detail of V1, then it may be better to remove it from Gluon and copy the V1 code over to the specific implementations? It may reduce the number of APIs to maintain in the future. But I'm also OK with keeping it here if you prefer.
@leezu ok, I believe I hadn't fully understood the PrefetchingStream API you described above. Makes sense now after a second read.
I am merging this now so that it can be cherry-picked to the v1.4 branch quickly.
…iability (#13447) * fix recordio.py * rewrite dataloader with pool * fix batch as tuple * fix prefetching * fix pylint * picklable function * use pickle * add missing commit
I've added this PR to the 1.4.x branch.
Is it possible to implement another DataLoader with a thread pool? According to my tests, a DataLoader with a thread pool is faster than a DataLoader with multiprocessing (about 2~5x), especially when the dataset is small or when most of the code is implemented in C++, for example rotating images with the OpenCV library. It also has fewer bugs and fewer requirements, since some people run their programs in Docker where shared memory may not be sufficient. And a DataLoader with a thread pool starts more quickly, as its overhead is smaller. I wonder if mxnet can give us a choice between a thread pool and multiprocessing.Pool.
You may just add an option to use a thread pool.
What is the basis for this comment? If you have multiple CPU cores, isn't multiprocessing more efficient, with easier memory management, and doesn't it bypass the GIL?
@anirudh2290 multiprocessing is often extremely inefficient. The reason is that any object passed between the processes must first be pickled and unpickled, and pickling is very slow. Using multi-threading of course requires extra care to bypass the GIL, but it can make sense if the pickling overhead is too high. I assume @kohillyang has such a case.
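A quick way to see the overhead described here (a minimal illustration; numbers are machine-dependent): every object sent between processes pays a pickle/unpickle round trip that threads never pay.

```python
import pickle
import timeit

import numpy as np

# a typical float64 image batch, roughly 77 MB
batch = np.random.rand(64, 3, 224, 224)

# the serialize/deserialize round trip multiprocessing performs per transfer
t = timeit.timeit(lambda: pickle.loads(pickle.dumps(batch)), number=10)
print("pickle round trip: %.1f ms per batch" % (t / 10 * 1e3))
```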
Yes, I think the problem with multi-threading is making sure the user understands that to get a real parallel-processing boost, the work needs to happen in underlying libraries like OpenCV that bypass the GIL. I think there's a case for it, but it shouldn't be the default, and it should come with an appropriate warning in the documentation. I believe different pickling protocols can be chosen to trade off compute for memory, @leezu? Maybe we could also expose that to the users.
@ThomasDelteil yes, threading should certainly not be the default. Unfortunately, Python does not allow choosing the pickle protocol version in the multiprocessing context: "Multiprocessing uses default protocol and there is no simple (without hacking the stdlib) way to pass the right protocol manually." https://bugs.python.org/issue26507 https://bugs.python.org/issue28053 https://bugs.python.org/issue23403 Thus we are stuck with a high pickling overhead when using multiprocessing for the foreseeable future.
Regarding the threadpool version, I think it's quite easy to swap to. Performance-wise, it's really case-by-case. And @kohillyang's suggestion is good to have; we can have both versions ready without a lot of change.
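A minimal sketch of how small the swap could be (the make_pool helper is hypothetical): multiprocessing.pool.ThreadPool is API-compatible with multiprocessing.Pool, so a thread-based loader can reuse the same initializer and worker functions with no pickling of batches. Whether it is faster depends on how much work happens in GIL-releasing native code such as OpenCV.

```python
from multiprocessing.pool import Pool, ThreadPool

def make_pool(num_workers, use_threads, initializer, initargs):
    # ThreadPool and Pool share the same constructor and apply/map API,
    # so the rest of the loader code stays unchanged
    cls = ThreadPool if use_threads else Pool
    return cls(num_workers, initializer=initializer, initargs=initargs)
```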
Description
Rewrite dataloader with multiprocessing.Pool; the logic of the dataloader with num_workers>0 is cleaner and more robust. Changes with this PR:

- Workers are managed by a shared multiprocessing.Pool: no matter how many epochs and iterators are launched, they share the same process pool. The benefit is twofold: 1) it's faster, since new iterators no longer try to fork new processes; 2) it's more reliable, since the process pool is managed along with the entire life of the dataloader, not of individual iterators.
- A new prefetch=None argument controls how many batches are fetched ahead of time (defaulting to 2 * num_workers).

The rest of the functionality and APIs are intact with this PR. No action is required for existing users.
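A minimal usage sketch based on the description above (assuming the prefetch argument is exposed on the DataLoader constructor as described):

```python
from mxnet.gluon.data import DataLoader
from mxnet.gluon.data.vision import MNIST

if __name__ == "__main__":  # guard needed where multiprocessing uses spawn
    dataset = MNIST(train=True)
    # prefetch tunes how many batches are fetched ahead; default 2 * num_workers
    loader = DataLoader(dataset, batch_size=32, num_workers=4, prefetch=8)
    for epoch in range(2):
        for data, label in loader:
            pass  # iterators reuse the loader's shared worker pool across epochs
```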
Thanks @szha @leezu for the ideas.
@piiswrong @eric-haibin-lin @hetong007 @YutingZhang for review and questions
This PR may affect gluoncv and gluonnlp dataloader variants; I will address them separately.