Big gap in experimental results #1
Hi, which version of the BRACS dataset are you using?
I assume 566 samples is from the new version, which we haven't used for this work. Are you able to reproduce with the previous version?
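A quick way to check which BRACS ROI release you have is to count the ROIs per split. This is only a sketch; the root path, folder layout, and `.png` extension below are assumptions about how the dataset was extracted locally.

```python
from pathlib import Path

# Hypothetical path to the extracted BRACS ROI dataset; adjust to your local copy.
bracs_root = Path("data/BRACS_RoI")

# Count ROI images per split to see which dataset release you are working with.
for split_dir in sorted(bracs_root.iterdir()):
    if not split_dir.is_dir():
        continue
    n_images = sum(1 for _ in split_dir.rglob("*.png"))
    print(f"{split_dir.name}: {n_images} ROIs")
```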
Maybe it's something to do with the preprocessing. You can download the preprocessed cell, tissue and hact graphs for the BRACS dataset here:
Or by downloading this zip file that includes the test cell graphs:
Let me know if you can reproduce with these.
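For anyone working from the downloaded archives: a minimal sanity check after download could look like the sketch below, assuming the graphs are stored as DGL `.bin` files (as in the hact-net pipeline). The path and the feature field names are placeholders, not guaranteed contents.

```python
from dgl.data.utils import load_graphs

# Hypothetical path to one of the downloaded test cell graphs; adjust to your extraction folder.
graph_path = "downloads/cell_graphs/test/example_cell_graph.bin"

# load_graphs returns a list of DGL graphs plus a dict of stored labels (if any).
graphs, label_dict = load_graphs(graph_path)
g = graphs[0]
print(g)               # number of nodes/edges and feature schemes
print(g.ndata.keys())  # node feature fields, e.g. deep features and centroids
```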
When pre-processing images, some will produce a warning like this while others won't:
The warnings should not be an issue for running the preprocessing.
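If the warning spam makes it hard to spot real failures, one option (not part of this repo, just generic Python) is to silence warnings around the per-image preprocessing loop so genuine exceptions still surface. The `preprocess` function below is only a stand-in for your actual graph-building call.

```python
import warnings

def preprocess(image_path):
    # Placeholder for the actual graph-building call; here it just emits a warning
    # to demonstrate that warnings are suppressed while errors still propagate.
    warnings.warn(f"low-contrast image: {image_path}")

with warnings.catch_warnings():
    warnings.simplefilter("ignore")
    for path in ["roi_001.png", "roi_002.png"]:
        preprocess(path)  # warnings are silenced; exceptions would still surface
```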
Hello, may I ask where I can download the pre-trained models?
@guillaumejaume Would you know why a model trained on graphs preprocessed and created locally doesn't reach the same performance as one trained on the graphs you provide in the download link?
I trained my own HACTNet, cell graph, and tissue graph models to see whether it's the cell or the tissue graphs that are causing the performance gap. Based on my results, it appears that it's mainly the tissue graphs generated by this repo under stock settings that aren't as good for model performance as those uploaded by the histocartography team; perhaps something about the default tissue-graph construction differs. Here are my weighted F1 scores on the test set for models trained on the IBM Box graph sets compared to those I created locally from the previous version of the BRACS ROIs:
EDIT: Noticed that my script failed to create the last two dozen training graphs because of a corrupted RoI download. I finished creating my graphs, retrained the models, and updated my findings.
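To narrow down where the tissue graphs diverge, one thing that may help is diffing basic statistics between an uploaded graph and the locally generated one for the same ROI. This is only a sketch: it assumes both are DGL `.bin` files, and the file paths are placeholders.

```python
from dgl.data.utils import load_graphs

def summarize(path):
    """Print node/edge counts and node-feature shapes for a saved DGL graph."""
    g = load_graphs(path)[0][0]
    print(path)
    print(f"  nodes: {g.num_nodes()}, edges: {g.num_edges()}")
    for name, feat in g.ndata.items():
        print(f"  ndata['{name}']: {tuple(feat.shape)}")

# Placeholder paths: the same ROI's tissue graph from the download vs. a local run.
summarize("downloads/tissue_graphs/test/some_roi.bin")
summarize("local_output/tissue_graphs/test/some_roi.bin")
```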
Hi, I'm not able to reproduce the "Trained on uploaded" results; my results remain around 55. Did you do hyperparameter tuning, or simply follow the settings (learning rate, epochs, batch size) provided in the README?
I didn't do any hyperparameter tuning; I just used the config and settings as shown. I've noticed that test set accuracy tends to fluctuate up and down a few percentage points even with the same settings, so I think 55 is close enough.
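On the few-point fluctuation: unless the training script already pins seeds, some of it is just run-to-run variance. A generic way to reduce it (an assumption on my side, not necessarily what the repo does) is to seed all the RNGs up front:

```python
import random

import numpy as np
import torch
import dgl

def set_seed(seed: int = 42):
    # Seed Python, NumPy, PyTorch (CPU + GPU) and DGL so repeated runs are comparable.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    dgl.seed(seed)  # available in recent DGL releases

set_seed(42)
```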
Hello,
When using your pre-trained models, I wasn't able to reach the reported accuracy and F1 scores.
The results of the hact model are as follows:
The results of the cggnn model are as follows:
These results are quite different from yours. I'm confused and don't know which stage went wrong.
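When comparing against the numbers in the thread above, it's worth double-checking that the same metric is computed (the scores quoted earlier are weighted F1). With scikit-learn that would look like the following; the label and prediction arrays here are placeholders for your actual test-set outputs.

```python
from sklearn.metrics import accuracy_score, f1_score

# Placeholder arrays: replace with the labels and model predictions from your test run.
y_true = [0, 1, 2, 2, 4, 5, 6]
y_pred = [0, 1, 2, 3, 4, 5, 6]

print("accuracy:   ", accuracy_score(y_true, y_pred))
print("weighted F1:", f1_score(y_true, y_pred, average="weighted"))
```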