
bug: eval iterates over model_config_map but only takes checkpoint path for single model #353

Open
NickleDave opened this issue Apr 21, 2021 · 3 comments
Labels: BUG (Something isn't working), high priority

Comments

@NickleDave (Collaborator)

This only works by accident: usually we train just one model, but it will become a problem when we try to compare multiple models.

@NickleDave NickleDave added the BUG Something isn't working label Apr 21, 2021
@NickleDave NickleDave self-assigned this Apr 21, 2021
@NickleDave (Collaborator, Author)

A simple fix for now would be to have core.eval accept a single model, and then have core.learncurve build models_map from model_config_map and iterate over that models_map itself.

core.predict should probably behave the same way: accept just a single model.
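A minimal sketch of that split, assuming hypothetical names (`eval_single`, `learncurve`, `models_map`) rather than vak's actual API: `eval` handles exactly one model, and `learncurve` owns both the instantiation and the iteration.

```python
# Hypothetical sketch of the proposed fix; names are illustrative,
# not vak's real function signatures.

def eval_single(model_name, model, checkpoint_path):
    """Evaluate one model loaded from one checkpoint.

    Real code would restore weights and compute metrics; here we just
    return a record showing the 1:1 model-to-checkpoint pairing.
    """
    return {"model": model_name, "checkpoint": checkpoint_path}

def learncurve(model_config_map, checkpoint_paths):
    """Build models from their configs, then call eval_single per model."""
    # stand-in for real model instantiation from each config
    models_map = {name: object() for name in model_config_map}
    return [
        eval_single(name, model, checkpoint_paths[name])
        for name, model in models_map.items()
    ]
```

With this shape, `eval` no longer needs to know about `model_config_map` at all; the caller decides how many models to evaluate.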

@NickleDave (Collaborator, Author)

One reason for the current design is that models specify an input shape, and we can't get that input shape until we instantiate a dataset inside predict or eval. So we can't just build models_map outside those functions.
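The constraint can be illustrated with a toy sketch (all names here are assumptions, not vak's API): the model's architecture is sized from the dataset's input shape, so `models_map` can only be built after the dataset is loaded, which currently happens inside eval/predict.

```python
import numpy as np

class DummyModel:
    """Stand-in model whose architecture depends on input_shape."""
    def __init__(self, input_shape):
        self.input_shape = input_shape  # e.g. would size the first layer

def load_dataset(n_freq_bins=40, n_time_bins=100):
    # stand-in for loading spectrograms from a .csv of file paths
    return [np.zeros((n_freq_bins, n_time_bins)) for _ in range(3)]

def core_eval(model_config_map):
    dataset = load_dataset()
    input_shape = dataset[0].shape  # only known once the dataset exists
    # hence models_map is built *inside* eval, not passed in from outside
    models_map = {name: DummyModel(input_shape) for name in model_config_map}
    return models_map
```

A single-model `eval` would face the same ordering constraint, so the model would either have to be instantiated inside it too, or be built from a config plus an input shape the caller obtains some other way.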
