From what I understand, the "iterations" parameter is the steps per epoch. In your script, you fixed that at 150. During my training, I tried to leave it as the default (None), hoping that it would run through all the created patches. However, it showed 10 as the steps per epoch during training. Does this mean the network only trains on 10 * batch_size (= 2) = 20 patches per epoch? This is way too few.
Do I misunderstand something here? Thanks a lot for your help!
Yes, your understanding is correct.
I assume you train on the whole dataset instead of only the CV training subset?
If yes, then 20 samples / 2 (batch size) = 10 iterations (or steps per epoch).
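The fallback described above can be sketched as a one-line calculation (the function name here is illustrative, not part of the MIScnn API; it just mirrors the 20 / 2 = 10 arithmetic from this thread):

```python
def default_steps_per_epoch(num_samples: int, batch_size: int) -> int:
    """Steps per epoch when "iterations" is None: one pass over all samples."""
    return num_samples // batch_size

# 20 samples with batch size 2 -> 10 steps per epoch, as observed in training
print(default_steps_per_epoch(20, 2))  # -> 10
```

So to see more patches per epoch, either enlarge the sample list or set "iterations" explicitly.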
Thanks for your reply! I've run some trainings over the past few weeks and found that the Dice score is poor when using a smaller patch for training. I tried to add a scale factor to resize the images to a smaller size, e.g. 256x256, and use a patch of 64x64x32 so that it fits on my GPU (Quadro M5000, 8 GB) and I don't have to queue on my office GPU cluster. However, the Dice score is really bad, around 0.53. Do you think a larger batch size would help? Or is there any way to obtain fair performance with a smaller GPU?
This repository focuses on the reproduction of our COVID-19 study.
For support on using MIScnn, please use the MIScnn repository.
To avoid redundant issues, here is the answer now:
I'm a little bit confused right now.
Do you have 2D or 3D data? If you have 3D data like CT, MRI, or PET, then you have to perform a voxel spacing normalization via resampling. Only on 2D data is a resize recommended (but be aware of aspect ratios).
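A minimal sketch of what voxel spacing normalization means, using plain NumPy with nearest-neighbour interpolation (the function name and target spacing are illustrative assumptions, not the MIScnn subfunction, which handles this via its preprocessing pipeline):

```python
import numpy as np

def resample_to_spacing(volume, spacing, target=(1.0, 1.0, 1.0)):
    """Nearest-neighbour resample of a (z, y, x) volume to a uniform voxel
    spacing, so that one voxel covers the same physical distance on each axis.
    `spacing` is the current voxel size in mm per axis."""
    factors = [s / t for s, t in zip(spacing, target)]
    new_shape = [max(1, int(round(d * f)))
                 for d, f in zip(volume.shape, factors)]
    # For each output index, pick the nearest source index
    idx = [np.minimum((np.arange(n) / f).astype(int), d - 1)
           for n, f, d in zip(new_shape, factors, volume.shape)]
    return volume[np.ix_(*idx)]

# Toy CT volume: 3.0 mm slice thickness, 0.7 mm in-plane resolution
ct = np.zeros((40, 100, 100))
out = resample_to_spacing(ct, (3.0, 0.7, 0.7))
print(out.shape)  # -> (120, 70, 70): now isotropic 1 mm voxels
```

In practice you would use a proper interpolation library (e.g. linear or spline interpolation), but the point is the same: all volumes end up on a common physical grid before patch extraction.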
To obtain better performance with smaller GPUs on e.g. 3D data, use a patch size of 128^3 or 64^3 and a batch size of 2. You can also try to disable batch normalization on the standard U-Net architecture.
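To see why shrinking the patch side length helps so much, a quick back-of-the-envelope voxel count (illustrative only; actual VRAM usage also depends on architecture depth, channel counts, and precision):

```python
def voxels_per_batch(patch, batch_size=2):
    """Raw input voxels fed to the network per training step."""
    z, y, x = patch
    return z * y * x * batch_size

for patch in [(128, 128, 128), (64, 64, 64)]:
    print(patch, voxels_per_batch(patch))
# Halving each side of a 3D patch cuts the voxel count by a factor of 8
```

Since activation memory in a U-Net scales roughly with the input voxel count, a 64^3 patch is about 8x cheaper than a 128^3 patch at the same batch size.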
As an alternative, it would make sense to think about a 2D slicing approach instead of a 3D analysis. This drastically reduces the required GPU VRAM while still allowing high resolution on the 2D slices.
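The 2D slicing idea boils down to treating each axial slice of the volume as an independent training image, so the network only ever sees full-resolution 2D inputs (a minimal sketch, assuming a (z, y, x) volume layout; the helper name is hypothetical):

```python
import numpy as np

def to_axial_slices(volume):
    """Split a (z, y, x) volume into a list of full-resolution 2D slices."""
    return [volume[i] for i in range(volume.shape[0])]

ct = np.random.rand(40, 512, 512)   # toy CT volume
slices = to_axial_slices(ct)
print(len(slices), slices[0].shape)  # 40 slices of shape (512, 512)
```

The trade-off: a 2D model loses inter-slice context along the z-axis, but each 512x512 slice fits comfortably on an 8 GB GPU where a high-resolution 3D patch would not.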