Collecting/loading/processing batch from data with different resolution #8204
Comments
Hi @Masaaki-75, thanks for your interest here. Hope it helps, thanks.
I see. I just tried it in my training pipeline and it worked! Better yet, it seems to be compatible with other popular PyTorch libraries like
Hi @KumoLiu, I am considering an advanced feature for data loading and processing: resizing (by cropping/padding/interpolation/..., whatever) the batch to a different resolution in each iteration, for example, choosing between two preset resolutions. Are there any similar cases in the MONAI tutorials, or existing solutions to this problem? Thanks!
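The per-iteration resolution switching described above can be sketched with a custom `collate_fn`: pick one of the preset sizes per batch, then random-crop every volume to it. This is a minimal numpy sketch, not a MONAI API; the `PRESET_SIZES` values and function names are assumptions for illustration.

```python
import random
import numpy as np

# Assumed preset resolutions (D, H, W); every input volume must be at least
# as large as the biggest preset in each dimension.
PRESET_SIZES = [(32, 64, 64), (64, 128, 128)]

def multires_collate(batch):
    """Pick one preset size per batch, random-crop all volumes to it, stack."""
    size = random.choice(PRESET_SIZES)
    out = []
    for vol in batch:  # each vol: np.ndarray of shape (D, H, W)
        starts = [random.randint(0, s - t) for s, t in zip(vol.shape, size)]
        slices = tuple(slice(st, st + t) for st, t in zip(starts, size))
        out.append(vol[slices])
    return np.stack(out)
```

Passed as `collate_fn` to a PyTorch `DataLoader`, this would yield batches whose spatial size changes from iteration to iteration while staying uniform within each batch.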
Is your feature request related to a problem? Please describe.
I have collected a bunch of CT volumes, each stored in .nii.gz format and of a different resolution (in depth, height, and width). Because of the differing resolutions, when I tried to load them using PyTorch's default DataLoader, I had to set the batch size to 1, which is inefficient and cannot fully utilize GPU memory.
Describe the solution you'd like
For efficient network training, I am looking for a Dataset/DataLoader class that can efficiently load several volumes (of different resolutions) as a batch, and (randomly) crop them into same-sized sub-volumes to feed into the network.
Are there any designs in MONAI that can address this?
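The request above (batch size > 1 despite differing volume resolutions, via random same-sized crops) can be sketched with a cropping `collate_fn`. This is a plain numpy sketch, not MONAI's implementation; `PATCH_SIZE` and the function names are assumptions, and it requires every volume to be at least `PATCH_SIZE` in each dimension.

```python
import random
import numpy as np

PATCH_SIZE = (64, 96, 96)  # assumed fixed sub-volume size (D, H, W)

def random_crop(vol, size=PATCH_SIZE):
    """Randomly crop a (D, H, W) volume to `size`; vol must be >= size."""
    starts = [random.randint(0, s - t) for s, t in zip(vol.shape, size)]
    sl = tuple(slice(st, st + t) for st, t in zip(starts, size))
    return vol[sl]

def crop_collate(batch):
    """Crop every volume to the same size so they stack into one batch array."""
    return np.stack([random_crop(v) for v in batch])
```

In MONAI itself, transforms such as `RandSpatialCropd` composed before a regular `DataLoader` serve the same purpose, so the custom collate function above is only needed if cropping must happen at batching time.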