Shape problem #14
Hi @domadaaaa, you need to change the window size for TransMorph; ViT-V-Net shouldn't need any change:

```python
def load_model(img_size):
    if args.model == 'TransMorph':
        config = CONFIGS_TM['TransMorph']
        if args.dataset == 'LPBA':
            config.img_size = img_size
            config.window_size = (5, 6, 5, 5)
        elif args.dataset == 'AbdomenCTCT':
            config.img_size = img_size
            # config.window_size = (5, 6, 8, 8)
        model = TransMorph(config)
    elif args.model == 'VoxelMorph':
        model = VoxelMorph(img_size)
    elif args.model == 'ViTVNet':
        config = CONFIGS_ViT['ViT-V-Net']
        model = ViTVNet(config, img_size=img_size)
    model.cuda()
    return model
```

You should also change this block:

```python
if args.dataset == 'AbdomenCTCT':
    img_size = (192, 160, 256)
else:
    img_size = (160, 192, 160) if args.dataset == 'LPBA' else (160, 192, 224)
```

I suggest you experiment a bit with TransMorph's window size to get it right, or you might get a shape mismatch error when running the model.
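From the values above, the first three entries of `window_size` appear to track `img_size // 32` per axis (e.g. `(160, 192, 160)` gives `(5, 6, 5)` and `(192, 160, 256)` gives `(6, 5, 8)`). A minimal sketch of that rule, assuming this divisibility pattern holds (the helper name is hypothetical, not part of the repo):

```python
# Hedged sketch: TransMorph's window size seems to follow img_size // 32 per
# axis. suggest_window_size is an illustrative helper, not repo code.
def suggest_window_size(img_size, factor=32):
    """Return img_size // factor per axis; raise if an axis is not divisible,
    which is the situation where shape-mismatch errors tend to appear."""
    ws = []
    for s in img_size:
        if s % factor != 0:
            raise ValueError(
                f"axis size {s} is not divisible by {factor}; "
                "consider padding or cropping the volume"
            )
        ws.append(s // factor)
    return tuple(ws)

print(suggest_window_size((160, 192, 160)))  # (5, 6, 5)
```

For an input like `(192, 208, 176)`, the 208 and 176 axes are not divisible by 32, which is consistent with the shape errors discussed below; padding the volume to a divisible size is one common workaround.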
The y dimension is 208, not 192. Should I modify the model, or just the configuration?
Modifying the two blocks I mentioned before should be sufficient.
Thank you! I have another question: why is the learning rate of the OFG stage set to 0.1, rather than something low like the Adam optimizer's? Won't this cause training oscillations, and what is the significance of this choice?
I'm not quite sure what you mean by not setting the Adam optimizer's learning rate to a low point. In the default training setting, we set the learning rate of the OFG module to 0.1, and that of the Adam optimizer used to update the model's parameters to 1e-4. Please note there are two Adam optimizers in this training framework: one is used in the OFG module, the other updates the registration model's parameters. The reason for using a large learning rate in the OFG module is that its optimizer runs for only a limited number of steps, for example 10, so it needs a large learning rate to produce a significant enough difference in the deformation field. In our experiments, we did not observe any significant oscillations during training. It is less stable at the start compared to unsupervised training, but within normal behaviour.
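To make the intuition concrete, here is a toy sketch (an illustration, not the repo's code) of why the short inner loop needs a large step size while the model's long-running optimizer uses a small one:

```python
# Toy illustration of the two learning-rate regimes described above.
# sgd_step is a hypothetical helper standing in for an optimizer update.
def sgd_step(x, grad, lr):
    return x - lr * grad

target = 1.0

# OFG-style inner loop: only ~10 steps, so a large lr (0.1) is needed for the
# value to move a meaningful distance toward the target.
field = 0.0
for _ in range(10):
    grad = 2 * (field - target)      # gradient of (field - target)**2
    field = sgd_step(field, grad, lr=0.1)

# Model-style update: a single small step with lr=1e-4 barely moves the value;
# it relies on many epochs of training instead.
param = 0.0
param = sgd_step(param, 2 * (param - target), lr=1e-4)

print(field, param)  # field ends close to 1.0; param has barely moved
```

With few iterations, a small learning rate would leave the optimized deformation field nearly identical to the original one, giving the model almost no signal to learn from.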
I understand. May I ask: in multimodal registration tasks, does the model's loss still use the MSE between the original and the optimized deformation fields, and does the regularization term of the loss need to be changed?
For modalities other than the MRI used in our experiments, our preliminary experiments suggest the method is generally robust to various settings: MSE generally works well across modalities, as do the regularization terms and the NCC loss used in the OFG module. However, for multi-modal tasks such as registration between MRI and CT data, some adjustment might be needed. This is because CT data uses a different value range (not 0 to 1 as in MRI), which can cause gradient problems if the raw CT data is not processed correctly. Theoretically, the default configuration should work fine provided the data preprocessing is done correctly.
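A minimal sketch of the kind of preprocessing meant here: clipping CT intensities to a window and rescaling to [0, 1] so they match the range the MRI pipeline assumes. The window bounds (-1000 to 1000 HU) and the function name are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Hedged sketch: clip CT Hounsfield values to a window and rescale to [0, 1].
# The bounds here are a common illustrative choice, not the authors' setting.
def normalize_ct(volume, hu_min=-1000.0, hu_max=1000.0):
    vol = np.clip(volume.astype(np.float32), hu_min, hu_max)
    return (vol - hu_min) / (hu_max - hu_min)

ct = np.array([-2000.0, -1000.0, 0.0, 1000.0, 3000.0])
print(normalize_ct(ct))  # values mapped into [0, 1]
```

Without such a step, the raw Hounsfield range can dominate intensity-based losses and destabilize gradients, which matches the caution above.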
If you conduct experiments on this, please kindly share your results with us in a new issue or pull request, as this aspect was not tested extensively in our main work. In addition, you may also consider replacing the NCC loss in the OFG module with another loss, such as SSIM, for different modalities.
Thank you, I'm willing to share some experimental results. I have obtained some results but still encounter problems: for example, using VoxelMorph as the baseline model and adding the OFG module, the loss converges slowly and oscillates with a certain amplitude. By the way, my total epochs are set to 200, with weight_opt = [1, 0.05] and weight_model = [1, 0.02].
I recommend a setting of 5 to 10 optimizing iterations, with the optimizer's weights at a 1:1 ratio.
Okay, so you mean that the difficulty in getting the DSC to converge is related to the weight of the regularization term in weight_opt? I will set weight_opt back to the default parameters in the next experiment.
That is likely the case. If the deformation field is not properly regularized during training, it can cause training stability issues; in other words, the difference before and after optimization might be too big for the model to learn from in a meaningful way. One way to verify this is to save some intermediate deformation fields and registration results during training and check whether the model is actually producing the expected results.
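A small sketch of that debugging suggestion, assuming NumPy arrays for the deformation fields (the function name, file layout, and the mean-absolute-difference statistic are all illustrative choices, not repo code):

```python
import os
import tempfile
import numpy as np

# Hedged sketch: periodically dump the deformation field before and after OFG
# optimization so you can inspect whether the optimized field stays close
# enough to the original for the model to learn from.
def save_debug_fields(step, flow_before, flow_after, out_dir, every=100):
    """Save both fields plus their mean absolute difference every `every` steps."""
    if step % every != 0:
        return None
    path = os.path.join(out_dir, f"flow_step{step:06d}.npz")
    np.savez(path, before=flow_before, after=flow_after,
             mean_abs_diff=np.mean(np.abs(flow_after - flow_before)))
    return path

out_dir = tempfile.mkdtemp()
f0 = np.zeros((3, 8, 8, 8), dtype=np.float32)   # toy 3-channel 3D flow field
f1 = f0 + 0.1
saved = save_debug_fields(200, f0, f1, out_dir, every=100)
print(saved is not None, save_debug_fields(201, f0, f1, out_dir) is None)
```

Plotting or warping with the saved fields offline then makes it easy to spot whether the before/after gap is blowing up as training progresses.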
Does the processing here mean scaling the pixel values of CT images to the 0-1 range as well?
Reporting a bug: in your new code, OFGLoss.py → class DeformationOptimizer(nn.Module):
Yes |
Got it, thank you for bringing this up. Will change this soon. |
Bug fixed in commit 60c9ab8. |
Thanks! O(∩_∩)O~~
Hello, may I ask which journal this paper was submitted to?
It is still under review, so it is not convenient to disclose that at the moment.
Hello, I don't quite understand the self-training and optimized self-training in the paper. May I ask how they differ from OFG?
How do I change 'ViT-V-Net' and 'TransMorph' if the shape of the input NIfTI data is (192, 208, 176)?