
Training vs. Validation loss, as reported in the article #22

Open
mgstingl opened this issue Nov 8, 2022 · 1 comment

Comments

mgstingl commented Nov 8, 2022

Hello everyone,

First, thanks for releasing such an amazing dataset! Incredible work!

I’m currently training a model on your dataset for my engineering degree, and I was wondering how you generate the validation loss with Detectron2, as reported in the supplementary notes:
"All training was run for a predefined set of iterations and the loss on a
validation set, separate from the training and test sets, were monitored to assess model over- and under-fitting. Model checkpoints
were saved and used for evaluation based on which had the lowest validation loss (Supplementary Figs. 9 and 10) on the rationale
that the lowest validation loss represents a good balance between an under- and over-fitted model."

I am able to get a very similar loss curve for the training set. I used the validation set to monitor the model's performance on the segmentation task and evaluated the trained model on the test set, but I don't know how you obtained the validation loss shown in your graphs.
[attached image: loss curves]

Any hint would mean a lot. Thank you in advance,

Kind regards,

Matías Stingl

RickardSjogren (Contributor) commented

Hi @mgstingl,
I am glad that you appreciate LIVECell!

Logging validation loss in Detectron2 requires you to implement your own hooks. While preparing the manuscript, we trained the models on our compute infrastructure, where we have some custom tooling. When releasing the source code, we chose to only publish models that are runnable on "native" Detectron2 without our custom tools, to make it easier to pick up on the work. The validation loss hook was unfortunately one of these tools.

However, you are not alone in wanting to log validation loss; you can find this issue in the Detectron2 repository discussing how to implement it. I hope that is helpful.
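
As a rough starting point, a sketch of such a hook could look something like this. It is only an illustration, not the exact hook we used: the validation split name "livecell_val" and the evaluation period are placeholders you would adapt to your own dataset registration.

```python
import torch
import detectron2.utils.comm as comm
from detectron2.engine import HookBase
from detectron2.data import build_detection_train_loader


class ValidationLossHook(HookBase):
    """Periodically computes the loss dict on a validation split."""

    def __init__(self, cfg, eval_period=100):
        self._period = eval_period
        # Reuse the training-style loader so ground-truth annotations are
        # kept and the model returns its loss dict instead of predictions.
        val_cfg = cfg.clone()
        val_cfg.defrost()
        val_cfg.DATASETS.TRAIN = ("livecell_val",)  # placeholder split name
        self._loader = iter(build_detection_train_loader(val_cfg))

    def after_step(self):
        next_iter = self.trainer.iter + 1
        if self._period > 0 and next_iter % self._period == 0:
            data = next(self._loader)
            with torch.no_grad():
                # The model is still in training mode, so calling it on a
                # batch with annotations returns the individual losses.
                loss_dict = self.trainer.model(data)
                # Average the per-loss values across workers before logging.
                loss_dict_reduced = {
                    "val_" + k: v.item()
                    for k, v in comm.reduce_dict(loss_dict).items()
                }
                if comm.is_main_process():
                    total = sum(loss_dict_reduced.values())
                    self.trainer.storage.put_scalar("val_total_loss", total)
                    self.trainer.storage.put_scalars(**loss_dict_reduced)
```

You would register it on the trainer before training starts, e.g. `trainer.register_hooks([ValidationLossHook(cfg, eval_period=100)])`, and the values then show up alongside the training losses in the event storage.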
