
Automatically check training and eval mode differences #165

Open
dxoigmn opened this issue Jun 8, 2023 · 1 comment

Comments

@dxoigmn
Contributor

dxoigmn commented Jun 8, 2023

Model authors sometimes use nn.Module.training to change the control flow of their model. This is problematic because we often assume that a model in training mode produces more or less the same result as in eval mode. We should detect when this is not the case and warn the user so they can take appropriate action!
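
A minimal sketch of what such a check might look like (the function name, tolerance, and warning text are assumptions, not an existing API): run the same input through the model in both modes and warn if the outputs diverge. Stochastic layers such as dropout, and batch-norm running-stat updates, would need extra handling in a real implementation.

```python
import warnings
import torch


def warn_on_train_eval_divergence(model: torch.nn.Module, x: torch.Tensor, atol: float = 1e-5) -> None:
    """Warn if a single forward pass differs between training and eval mode."""
    was_training = model.training
    try:
        with torch.no_grad():
            model.train()
            train_out = model(x)  # note: batch-norm running stats still update here
            model.eval()
            eval_out = model(x)
        if not torch.allclose(train_out, eval_out, atol=atol):
            warnings.warn(
                "Model output differs between training and eval mode; "
                "nn.Module.training may be changing control flow."
            )
    finally:
        # Restore whatever mode the caller had set.
        model.train(was_training)
```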

@dxoigmn
Contributor Author

dxoigmn commented Jun 8, 2023

I will add that it is also possible to change control flow based on other things, like whether ground truth is present or not. Detecting this kind of behavior should probably be in scope too. This is done, for example, in this implementation of YOLOv4:
https://github.com/AlexeyAB/Yet-Another-YOLOv4-Pytorch/blob/d80d6a20372598b6306b37218cb61533e8bd9592/model.py#L893

Thankfully, that code doesn't actually change anything about the output, just whether it computes a loss or not.
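
For reference, a schematic of that pattern (a hypothetical module, not the linked YOLOv4 code): the presence of ground truth changes what forward returns, but not the predictions themselves.

```python
import torch


class Detector(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.head = torch.nn.Linear(16, 4)

    def forward(self, x, targets=None):
        preds = self.head(x)
        if targets is not None:
            # Extra branch: compute a loss only when ground truth is supplied.
            loss = torch.nn.functional.mse_loss(preds, targets)
            return preds, loss
        return preds
```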
