Validation during training: time incoherence #1010
The bug is in:

```python
def _peval(self, strategy):
    for el in strategy._eval_streams:
        strategy.eval(el)  # <--- here
```

It should be an easy fix; we'll notify you when it is incorporated in the main branch. In the meantime, this:

```python
for experience in generic_scenario.train_stream:
    n_exp = experience.current_experience
    print("Start of experience: ", n_exp)
    print("Current Classes: ", experience.classes_in_this_experience)
    cl_strategy.train(experience, eval_streams=[], num_workers=4)
    cl_strategy.eval(generic_scenario.test_stream[0:n_exp+1], num_workers=4)
    print('Computed accuracy on the whole test set')
```

should work with the same number of workers and the same speed for both train and eval. Obviously it is not a fix for your problem, since the metrics are computed only at the end of the experience and not after every epoch.
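To make the bug concrete, here is a framework-free sketch of the pattern described above (all names are illustrative, not the actual Avalanche code): the periodic-eval hook calls `strategy.eval()` without forwarding the keyword arguments, such as `num_workers`, that were passed to `train()`, so evaluation silently falls back to the default single-worker data loading.

```python
# Hypothetical minimal reproduction of the kwargs-dropping pattern.
class Strategy:
    def __init__(self):
        self._eval_streams = []
        self.eval_workers_seen = []  # records the num_workers each eval call got

    def eval(self, stream, num_workers=0):
        self.eval_workers_seen.append(num_workers)

    def _peval_buggy(self, **train_kwargs):
        for el in self._eval_streams:
            self.eval(el)  # kwargs dropped: eval runs with num_workers=0

    def _peval_fixed(self, **train_kwargs):
        for el in self._eval_streams:
            self.eval(el, **train_kwargs)  # forward num_workers to eval


s = Strategy()
s._eval_streams = ["val_stream"]
s._peval_buggy(num_workers=4)
s._peval_fixed(num_workers=4)
print(s.eval_workers_seen)  # -> [0, 4]
```

This matches the reported symptom: training (which receives `num_workers=4`) is fast, while the in-training evaluation runs with the default worker count and is several times slower.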
Thank you again, @ggraffieti. Yeah, I previously tried the code you proposed, both with and without passing the ... About the bug, it is possible it also arises in ...
You are right, @PabloMese, good catch!
I will give it a try, @ggraffieti. So we can now close this issue. Thanks for the help.
Perfect, @PabloMese!
@PabloMese, solved by the linked PR.
Hi, everyone,
I'm trying to run the training and the validation of a CIL algorithm simultaneously with
eval_every = 1
to get the accuracy and the loss on the test set for each epoch. This is the code I use; note that I set num_workers = 4 in the train call. This is the problem I got: while a training iteration lasts only about 21 s, the evaluation lasts almost 3 min, even though the evaluation stream is 5× shorter. I tried both the beta version and the latest version, and the same error occurred in both.