Hello everyone,
First, thanks for releasing such an amazing dataset! Incredible work!
I'm currently training a model on your dataset for my engineering degree, and I was wondering how you generate the validation loss with Detectron2, as reported in the supplementary notes:
"All training was run for a predefined set of iterations and the loss on a
validation set, separate from the training and test sets, were monitored to assess model over- and under-fitting. Model checkpoints
were saved and used for evaluation based on which had the lowest validation loss (Supplementary Figs. 9 and 10) on the rationale
that the lowest validation loss represents a good balance between an under- and over-fitted model."
I am able to reproduce a very similar loss curve for the training set. I used the validation set to monitor the model's performance on the segmentation task and evaluated the trained model on the test set, but I don't know how you obtained the validation loss shown in your graphs.
Any hint would mean a lot. Thank you in advance,
Kind regards,
Matías Stingl
Hi @mgstingl,
I am glad that you appreciate LIVECell!
Logging validation loss in Detectron2 requires you to implement your own hook. While preparing the manuscript, we trained the models on our compute infrastructure, where we have some custom tooling. When releasing the source code, we chose to publish only models that are runnable on "native" Detectron2, without our custom tools, to make it easier to pick up the work. The validation loss hook was unfortunately one of those tools.
However, you are not alone in wanting to log validation loss, and you can find this issue discussing how to implement it in Detectron2. I hope that is helpful.
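For anyone else landing here, a minimal sketch of such a hook could look like the code below. To be clear, this is not the tooling we used for the paper, just an illustration of the general approach discussed in that issue: run the model on the validation split while it is still in training mode (so it returns its loss dict), with gradients disabled, and log the mean loss to the trainer's event storage.

```python
import torch

import detectron2.utils.comm as comm
from detectron2.data import DatasetMapper, build_detection_test_loader
from detectron2.engine import HookBase


class ValidationLossHook(HookBase):
    """Periodically runs the model on a validation split and logs the loss.

    The model is kept in training mode so that it returns its loss dict,
    but gradients are disabled. Detectron2's default Mask R-CNN uses frozen
    BatchNorm; if your model has active BN layers, running it in training
    mode will update their running statistics.
    """

    def __init__(self, cfg, val_dataset_name, eval_period=500):
        self._period = eval_period
        # is_train=True keeps the annotations so the model can compute losses.
        self._loader = build_detection_test_loader(
            cfg, val_dataset_name, mapper=DatasetMapper(cfg, is_train=True)
        )

    def after_step(self):
        next_iter = self.trainer.iter + 1
        if next_iter % self._period != 0 and next_iter != self.trainer.max_iter:
            return
        total = 0.0
        with torch.no_grad():
            for batch in self._loader:
                loss_dict = self.trainer.model(batch)
                total += sum(loss_dict.values()).item()
        mean_loss = total / max(len(self._loader), 1)
        # This sketch assumes single-GPU training; for multi-GPU runs the
        # losses should be reduced across workers (e.g. comm.reduce_dict).
        if comm.is_main_process():
            self.trainer.storage.put_scalar("validation_loss", mean_loss)
```

Registering it with a `DefaultTrainer` then looks roughly like this, where `"livecell_val"` is a placeholder for whatever name your validation split is registered under:

```python
from detectron2.engine import DefaultTrainer
from detectron2.engine.hooks import BestCheckpointer

trainer = DefaultTrainer(cfg)
trainer.register_hooks([
    ValidationLossHook(cfg, "livecell_val", eval_period=500),
    # Recent Detectron2 versions ship a BestCheckpointer hook that can keep
    # the checkpoint with the lowest logged validation loss (mode="min").
    BestCheckpointer(500, trainer.checkpointer, "validation_loss", mode="min"),
])
trainer.resume_or_load(resume=False)
trainer.train()
```

Note that evaluating the full validation set every `eval_period` iterations can be slow; some implementations evaluate only a fixed subset of batches per check as a cheaper approximation.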