UNet does not work for my own data #1630
-
Hello, I modified the code from the [brats_segmentation_3d] tutorial to use my own data (2 input MRI series and an 11-class output segmentation). The code worked fine with the original BraTS tutorial and BraTS data, but raised an error with my data. I generated a train_loader_sample with:
and created the model as in the tutorial:
and it showed this error:
Replies: 2 comments 4 replies
-
Hi, the problem is that the depth of your image is just 8, which doesn't allow for the number of downsampling convolutions you have requested. With a bigger image it works fine:

```python
import torch
from monai.networks.nets import UNet

device = torch.device("cuda:0")
model = UNet(
    dimensions=3,
    in_channels=2,
    out_channels=11,
    channels=(16, 32, 64, 128, 256),
    strides=(2, 2, 2, 2),
    num_res_units=2,
).to(device)
model(torch.rand((2, 2, 400, 400, 16), device=device))
```

Or with a smaller network, your original image size works fine too:

```python
import torch
from monai.networks.nets import UNet

device = torch.device("cuda:0")
model = UNet(
    dimensions=3,
    in_channels=2,
    out_channels=11,
    channels=(16, 32, 64, 128),
    strides=(2, 2, 2),
    num_res_units=2,
).to(device)
model(torch.rand((2, 2, 400, 400, 8), device=device))
```
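The arithmetic behind this constraint can be sketched as follows (a rough illustration, not code from the thread; the helper `downsampled_sizes` is hypothetical): each stride-2 entry halves every spatial dimension, so the smallest dimension must survive one halving per level to stay non-degenerate at the bottleneck.

```python
# Sketch: why a depth of 8 fails with four stride-2 downsamplings.
# Each stride-2 level halves the spatial size; it must remain >= 1
# (and in practice > 1 to carry useful information) at every level.

def downsampled_sizes(size, strides):
    """Spatial size after each downsampling level (hypothetical helper)."""
    sizes = [size]
    for s in strides:
        size = size // s
        sizes.append(size)
    return sizes

print(downsampled_sizes(16, (2, 2, 2, 2)))  # [16, 8, 4, 2, 1] -> works
print(downsampled_sizes(8, (2, 2, 2, 2)))   # [8, 4, 2, 1, 0]  -> depth collapses
print(downsampled_sizes(8, (2, 2, 2)))      # [8, 4, 2, 1]     -> smaller network works
```

This matches the two fixes above: either grow the depth to 16 so four halvings succeed, or drop one stride-2 level so a depth of 8 is enough.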
-
I am running into another issue on a related note. I am trying to apply the UNet model tutorial to my own data, but I get this error:

```
~/monai_code/MONAI/monai/metrics/meandice.py in __call__(self, y_pred, y)
ValueError: y should be a binarized tensor.
```

On debugging, I realized that in the given tutorial, seg has all values between 0 and 1, so dice_metric can convert the tensor to a byte tensor. However, the seg mask in our data has values greater than 1. What is the best way to normalize the label in the MONAI platform?
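One common way to handle this (a sketch, assuming the label tensor holds integer class indices 0..C-1; the shapes and names here are illustrative, not from the thread) is to one-hot encode the mask so every channel is binary before passing it to the Dice metric:

```python
import torch
import torch.nn.functional as F

# Sketch: turn an integer-valued segmentation mask into a binarized
# one-hot tensor, one channel per class, so each channel is 0/1 as
# the Dice metric expects. Shapes are illustrative.
num_classes = 11
seg = torch.randint(0, num_classes, (2, 64, 64, 8))  # (B, H, W, D) class indices

one_hot = F.one_hot(seg.long(), num_classes)      # (B, H, W, D, C)
one_hot = one_hot.permute(0, 4, 1, 2, 3).float()  # (B, C, H, W, D), channel-first

print(one_hot.shape)  # torch.Size([2, 11, 64, 64, 8])
```

MONAI's `AsDiscrete` transform offers a one-hot option that performs the same conversion inside a transform pipeline, though the exact argument name varies between MONAI versions.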