
"Iterations" param in run_fold #20

Open
LHJ97 opened this issue Apr 9, 2021 · 3 comments
Open

"Iterations" param in run_fold #20

LHJ97 opened this issue Apr 9, 2021 · 3 comments
Assignees
Labels
question Further information is requested

Comments

LHJ97 commented Apr 9, 2021

From what I understand, the "iterations" parameter is the number of steps per epoch. In your script, you fixed it at 150. During my training, I left it at the default (None), hoping it would run through all of the created patches. However, it showed 10 steps per epoch during training. Does this mean the network only trains on 10 * batch_size (= 2) = 20 patches per epoch? That is way too few.

Do I misunderstand something here? Thanks a lot for your help!

muellerdo (Member) commented Apr 10, 2021

Hey @LHJ97,

thanks for your interest.

Yes, your understanding is correct.
I assume you trained on the whole dataset instead of only the CV training subset?
If so, then 20 samples / 2 (batch size) = 10 iterations (or steps per epoch).
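
For illustration, the fallback is essentially this arithmetic (a minimal sketch of the computation, not MIScnn's exact internals):

```python
import math

# Sketch of the iterations=None fallback: cover each training sample
# once per epoch. Values match the case above.
num_samples = 20   # whole dataset instead of only the CV training subset
batch_size = 2

steps_per_epoch = math.ceil(num_samples / batch_size)
print(steps_per_epoch)  # -> 10
```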

Cheers,
Dominik

muellerdo self-assigned this Apr 10, 2021
muellerdo added the question label Apr 10, 2021
LHJ97 (Author) commented Apr 28, 2021

Thanks for your reply! I've run some training over the past few weeks and found that the Dice score is poor when training with a smaller patch size. I tried to include a subfunction (sf) to resize the images to a smaller size, e.g. 256x256, and used a patch size of 64x64x32 so that training fits on my GPU (Quadro M5000, 8 GB) and I don't have to queue for my office GPU cluster. However, the Dice score is really bad, around 0.53. Do you think a larger batch size would help? Or is there any way to obtain fair performance with a smaller GPU?
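
For reference, my preprocessing setup looks roughly like this (a sketch from memory; the Resize subfunction name, its new_shape parameter, and the data interface details are assumptions on my side, not verified against the MIScnn API):

```python
from miscnn import Data_IO, Preprocessor
from miscnn.data_loading.interfaces import NIFTI_interface
from miscnn.processing.subfunctions import Resize

# Hypothetical data interface; pattern, path and class count are placeholders.
interface = NIFTI_interface(pattern="case_[0-9]*", channels=1, classes=2)
data_io = Data_IO(interface, "data/")

# Downscale each image to 256x256 before patch extraction
# (I'm not sure this shape is appropriate for 3D volumes,
# which may be part of my problem).
sf_resize = Resize(new_shape=(256, 256))

# Small 3D patches so training fits into 8 GB of VRAM.
pp = Preprocessor(data_io, batch_size=2, subfunctions=[sf_resize],
                  analysis="patchwise-crop", patch_shape=(64, 64, 32))
```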

Best wishes,
LHJ

muellerdo (Member) commented

Hello @LHJ97,

this repository focuses more on the reproduction of our COVID-19 study.
For support on using MIScnn, please use the MIScnn repository.

To avoid redundant issues, here is the answer anyway:
I'm a little bit confused right now.
Do you have 2D or 3D data? If you have 3D data such as CT, MRI, or PET, then you have to perform voxel space normalization via resampling. A resize is only recommended for 2D data (but be aware of aspect ratios).
To obtain better performance with smaller GPUs on e.g. 3D data, use a patch size of 128^3 or 64^3 and a batch size of 2. You can also try disabling batch normalization in the standard U-Net architecture.
As an alternative, it would make sense to consider a 2D slicing approach instead of a full 3D analysis. This drastically reduces the required GPU VRAM while still allowing high resolution on the 2D slices.
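
A rough sketch of such a configuration (a sketch only; the subfunction and architecture parameter names follow my reading of the MIScnn examples, and the spacing/patch values are placeholders, not our COVID-19 study settings):

```python
from miscnn import Data_IO, Preprocessor, Neural_Network
from miscnn.data_loading.interfaces import NIFTI_interface
from miscnn.processing.subfunctions import Resampling, Normalization
from miscnn.neural_network.architecture.unet.standard import Architecture

# Hypothetical NIfTI data set; pattern, path and class count are placeholders.
interface = NIFTI_interface(pattern="case_[0-9]*", channels=1, classes=2)
data_io = Data_IO(interface, "data/")

# Voxel space normalization via resampling (spacing values are placeholders),
# followed by z-score intensity normalization.
subfunctions = [Resampling((1.58, 1.58, 2.70)),
                Normalization(mode="z-score")]

# Small 3D patches and a batch size of 2 to fit on an 8 GB GPU.
pp = Preprocessor(data_io, batch_size=2, subfunctions=subfunctions,
                  analysis="patchwise-crop", patch_shape=(64, 64, 64))

# Standard U-Net with batch normalization disabled to save additional VRAM.
unet = Architecture(batch_normalization=False)
model = Neural_Network(preprocessor=pp, architecture=unet)
```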

Cheers,
Dominik
