
Best validation performance in multi-round training #1351

Answered by janfb
ali-akhavan89 asked this question in Q&A

Hi @ali-akhavan89

thanks for the questions - very good ones!

  1. Indeed, because the loss function itself differs between the first round and all subsequent rounds, the loss values are not comparable across rounds.
  2. Here, too, it is difficult to compare validation performance between rounds. First, the validation set changes slightly: each round, new data sampled from the current proposal are added. Second, the proposals themselves change with every round, especially in the first rounds, when we are still "zooming in" on the region conditioned on x_o.
  3. Given the above answers, probably not. The validation performance is still useful, though, e.g., for the early-stopping criterion within a round and for comparing different neura…
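To make point 3 concrete, here is a minimal sketch of the kind of within-round early stopping the answer refers to: training stops once the validation loss has not improved for a fixed number of epochs. The function name `train_one_round` and the `patience` parameter are illustrative, not sbi's actual API; sbi's trainer implements this internally.

```python
# Illustrative sketch of patience-based early stopping within one
# training round. Not sbi's API; names here are hypothetical.

def train_one_round(val_losses, patience=3):
    """Return the epoch index of the best validation loss, stopping
    early once the loss has not improved for `patience` epochs.

    `val_losses` stands in for the per-epoch validation losses that a
    trainer would compute on its held-out validation set.
    """
    best = float("inf")
    best_epoch = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, best_epoch = loss, epoch
        elif epoch - best_epoch >= patience:
            return best_epoch  # stop; keep the model from the best epoch
    return best_epoch

# Example: validation loss improves until epoch 4, then degrades.
losses = [1.0, 0.8, 0.7, 0.65, 0.64, 0.66, 0.67, 0.68, 0.70]
print(train_one_round(losses))  # -> 4
```

Because each round uses a different proposal (and, after round 1, a different loss), these validation losses are only meaningful *within* a round, which is exactly why they support early stopping and model comparison per round but not cross-round comparison.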

Answer selected by janfb