Any tricks to reproduce the performance? #7
Comments
Hi @mssjtxwd, this project aims to provide a prototype baseline method for the RibFrac Challenge. However, as we are the data provider for the challenge, we would like to avoid unintended data leakage, so we did not release full details of the models for either the training or the inference stage. Basically, the FracNet in our EBioMedicine paper is a one-stage model without a false-positive reduction stage. The performance reported in the main text comes from a model trained on the RibFrac training set plus 300 in-house cases, but we also report the performance of a model trained only on the public training set in the supplementary materials (please refer to the paper). You may find it still works very well and could be a top-ranking solution. As for the reproducibility issue, I see two possible options:
Sorry to disappoint you, but if you want to reproduce the performance in the paper, you have to tune your model a little. However, it is guaranteed that the performance can be reproduced with this one-stage model using a 3D UNet as the backbone. Good luck!
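For readers following along, here is a minimal sketch of what such a one-stage pipeline can look like: a 3D UNet-style network produces a voxel-wise fracture probability map, and connected components of the thresholded map become detection proposals. The helper names, the 0.1 threshold, and the scoring rule below are placeholder assumptions, not the code used in the paper or in this repository.

```python
import torch
from skimage import measure


def probs_to_detections(prob_volume, threshold=0.1):
    """Turn a voxel-wise probability volume into (centroid, score) proposals."""
    binary = prob_volume > threshold
    labels = measure.label(binary)
    detections = []
    for region in measure.regionprops(labels):
        # mean probability inside the component serves as its confidence score
        score = float(prob_volume[labels == region.label].mean())
        detections.append({"centroid": region.centroid, "score": score})
    return detections


@torch.no_grad()
def predict_patch(model, patch):
    """Run one 3D crop through a UNet-style network and return probabilities."""
    x = torch.from_numpy(patch[None, None]).float()  # shape (1, 1, D, H, W)
    return torch.sigmoid(model(x))[0, 0].cpu().numpy()
```

In practice, whole scans are processed with sliding-window crops and the per-patch probabilities are stitched back together before the connected-component step.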
Hi @mssjtxwd, as @duducheng mentioned, the training configuration in our actual implementation is different from the one in the open-sourced code. There are a few details you may try:
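Purely as a generic illustration of the kinds of settings such tuning typically touches, a hypothetical configuration might look like the dictionary below. Every value is a placeholder; none of them come from the authors' private implementation.

```python
# Hypothetical hyperparameters one might revisit when trying to close the gap;
# these values are illustrative placeholders, not the authors' configuration.
config = {
    "crop_size": (64, 64, 64),       # size of the 3D patches fed to the network
    "pos_patch_ratio": 0.5,          # fraction of patches centred on fractures
    "batch_size": 4,
    "optimizer": "Adam",
    "learning_rate": 1e-4,
    "lr_schedule": "cosine",         # e.g. decay instead of a fixed learning rate
    "epochs": 100,
    "augmentation": ["flip", "rotate90", "intensity_shift"],
    "loss": "dice + bce",            # mixed segmentation losses are common here
    "inference_threshold": 0.1,      # probability cut-off before post-processing
    "tta": True,                     # test-time augmentation (e.g. flips)
}
```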
My result is the same as yours. Did you manage to resolve it?
Excuse me,
Hello, I am trying to reproduce your work with the RibFrac data. Following your README, I put the data in the specified directory, ran train.py directly for training, and then ran predict.py + eval.py to measure the performance. However, we can only achieve about 50% recall @ 8 FA on the validation set, which is significantly different from the figure claimed by the 3rd-place team in the competition (the performance table in their slides shows they achieved 90% recall @ 8 FA on the validation set). Of course, the performance can be affected by many factors, so I would like to ask what performance your method can achieve when trained only on the RibFrac training set (results with post-processing and TTA are fine).
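For context on the metric being compared here, a rough way to read a "recall @ N FA" operating point off pooled detections is sketched below. The data layout and the function name are illustrative assumptions, not how eval.py is actually implemented.

```python
import numpy as np


def recall_at_fp_per_scan(pred_scores, pred_is_tp, num_gt, num_scans, fp_budget=8.0):
    """Recall at a fixed number of false positives per scan, FROC-style.

    pred_scores : confidence of every predicted fracture, pooled over all scans
    pred_is_tp  : 1 where a prediction matches a ground-truth fracture, else 0
    num_gt      : total number of ground-truth fractures
    num_scans   : number of scans in the evaluation set
    """
    order = np.argsort(pred_scores)[::-1]        # sort by descending confidence
    is_tp = np.asarray(pred_is_tp)[order]
    tps = np.cumsum(is_tp)
    fps = np.cumsum(1 - is_tp)
    within_budget = fps / num_scans <= fp_budget
    if not within_budget.any():
        return 0.0
    # best recall reachable without exceeding the false-positive budget
    return float(tps[within_budget].max() / num_gt)
```

For example, with 80 validation scans the function sweeps confidence thresholds from high to low and reports the highest recall whose pooled false positives stay within 8 × 80.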