
How can I perform only validation without training #2481

Closed
xinfangliu opened this issue Jul 3, 2020 · 9 comments
Labels: question (Further information is requested) · won't fix (This will not be worked on)

Comments

@xinfangliu

❓ Questions and Help

It seems the metric 0.8737 in the checkpoint 'm10-f1_1=0.8737.ckpt' cannot be found in the progress bar.
I want to load the .ckpt and perform validation without training.
How should I configure the trainer?

@xinfangliu added the question label Jul 3, 2020
@awaelchli
Contributor

The progress bar averages the values over time.
To test your model on the validation set, you can do:

model = ...
trainer = Trainer(...)
trainer.test(model, ckpt_path="path/to/m10-f1_1=0.8737.ckpt", test_dataloaders=model.val_dataloader())

@xinfangliu
Author

I'm afraid not. This code does not call validation_step() and validation_epoch_end(), and it cannot compute the metric either.

@awaelchli
Contributor

You need to implement test_step and test_epoch_end. If testing does not differ from your validation, you can just call validation_step from test_step.
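
For example, a minimal sketch of that delegation (the class, loss, and metric names here are illustrative, not from this issue), using the hook names available at the time:

import torch
import torch.nn.functional as F
import pytorch_lightning as pl

class MyModel(pl.LightningModule):
    ...
    def validation_step(self, batch, batch_idx):
        x, y = batch
        loss = F.cross_entropy(self(x), y)
        return {"val_loss": loss}

    def validation_epoch_end(self, outputs):
        avg_loss = torch.stack([o["val_loss"] for o in outputs]).mean()
        return {"val_loss": avg_loss, "log": {"val_loss": avg_loss}}

    # testing is identical to validation here, so just delegate
    def test_step(self, batch, batch_idx):
        return self.validation_step(batch, batch_idx)

    def test_epoch_end(self, outputs):
        return self.validation_epoch_end(outputs)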

@rohitgr7
Contributor

rohitgr7 commented Jul 3, 2020

@awaelchli What if they (the testing and validation procedures) are different, or the model doesn't define test_step or test_epoch_end, or one just wants to use test_step and test_epoch_end to save the model's outputs to a file? I suggest there should be something like .evaluate() or .validate() just for validation_step and validation_epoch_end, with the same API as .test(). Thoughts?

@awaelchli
Contributor

I think we need to wait for #2107 for this to be possible. evaluate() would have to be different from the validation loop that runs during training; for example, it should not invoke callbacks like early stopping or checkpointing.

@ghost

ghost commented Jul 6, 2020

I thought it's not allowed? I remember seeing it here

@stale

stale bot commented Sep 4, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the won't fix label Sep 4, 2020
@stale stale bot closed this as completed Sep 13, 2020
@Code-Cornelius
Contributor

Code-Cornelius commented Dec 3, 2021

what about this:

"You can also run just the validation loop on your validation dataloaders by overriding validation_step() and calling validate()."

model = Model()
trainer = Trainer()
trainer.validate(model)

From https://pytorch-lightning.readthedocs.io/en/latest/common/lightning_module.html (validation loop section).

Also there:

"Validation

You can perform an evaluation epoch over the validation set, outside of the training loop, using pytorch_lightning.trainer.trainer.Trainer.validate(). This might be useful if you want to collect new metrics from a model right at its initialization or after it has already been trained."

trainer.validate(dataloaders=val_dataloaders)

https://pytorch-lightning.readthedocs.io/en/stable/common/trainer.html#validation

@awaelchli
Contributor

Yep! Nice find @Code-Cornelius. Since the original posting of this issue, we introduced Trainer.validate.
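
Putting it together for the original question, a minimal sketch (the checkpoint path comes from this issue; the model class is a placeholder), assuming a recent Lightning version where Trainer.validate accepts a ckpt_path argument:

from pytorch_lightning import Trainer

model = MyModel()
trainer = Trainer()
# restores the weights from the checkpoint, then runs validation_step /
# validation_epoch_end over the model's val_dataloader, with no training
trainer.validate(model, ckpt_path="path/to/m10-f1_1=0.8737.ckpt")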
