Use your own pre-trained model for downstream segmentation #204
Comments
Hey,
Ok, thank you very much for your reply. Next, I will try what you suggested. Thanks again.
Hello,
Hey! |
Next, I will try using data augmentation (the previous training followed the official default settings). During pre-training, I found that without normalization, MAE's reconstruction quality is better and the loss is also very low (0.0822). When fine-tuning, I used the mean and std of my fine-tuning dataset, and I found that this also affected the performance of the downstream classification task.
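One source of the normalization mismatch described above is using ImageNet statistics on a domain that looks nothing like ImageNet. A minimal sketch of computing dataset-specific per-channel mean/std to use in the fine-tuning transforms (the `channel_stats` helper and the toy data are illustrative, not from the official code):

```python
import numpy as np

def channel_stats(images):
    """Compute per-channel mean/std over a stack of images.

    images: float array of shape (N, H, W, C) with values in [0, 1].
    The resulting statistics can replace the ImageNet defaults
    (mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) when the
    fine-tuning domain differs strongly, e.g. medical images.
    """
    images = np.asarray(images, dtype=np.float64)
    # Average over the batch and spatial axes, keeping channels separate.
    mean = images.mean(axis=(0, 1, 2))
    std = images.std(axis=(0, 1, 2))
    return mean, std

# Toy stand-in for a dataset: 4 RGB images of 8x8 pixels.
rng = np.random.default_rng(0)
imgs = rng.random((4, 8, 8, 3))
mean, std = channel_stats(imgs)
```

Whatever statistics are used, they should be the same in pre-training and fine-tuning; mixing unnormalized pre-training with normalized fine-tuning gives the encoder inputs on a different scale than it was trained on.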
I would like to ask you a question. I pre-trained on my medical image dataset (~6k images) with the official pre-training code, and the downstream task is segmentation. However, fine-tuning with my own pre-trained model is always inferior to fine-tuning with the authors' ImageNet pre-trained model. Why is that? Is my pre-training dataset still too small?
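Besides dataset size, a common pitfall in this setup is loading the whole MAE checkpoint into the downstream backbone: MAE checkpoints also contain decoder and mask-token weights that the segmentation model should not receive. A minimal sketch of filtering a checkpoint's state dict before loading, with illustrative key names (the exact prefixes depend on the repository):

```python
def encoder_weights(checkpoint):
    """Keep encoder weights; drop MAE decoder / mask-token entries.

    checkpoint: dict mapping parameter names to tensors (here, any values).
    The prefixes below are hypothetical examples of decoder-side keys.
    """
    skip_prefixes = ("decoder", "mask_token")
    return {
        name: value
        for name, value in checkpoint.items()
        if not any(name.startswith(p) for p in skip_prefixes)
    }

# Toy checkpoint: two encoder entries, two decoder-side entries.
ckpt = {
    "patch_embed.proj.weight": "encoder",
    "blocks.0.attn.qkv.weight": "encoder",
    "decoder_blocks.0.attn.qkv.weight": "decoder",
    "mask_token": "decoder",
}
kept = encoder_weights(ckpt)
```

With a framework like PyTorch, the filtered dict would then be passed to the backbone with non-strict loading so that missing decoder keys are ignored rather than raising an error.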