
Large model evaluation #29

Open
naveenram00 opened this issue Jul 30, 2021 · 0 comments
@naveenram00 (Collaborator)

Right now the model evaluation script feeds in the checkpoints one at a time over the range [FLAGS.eval_start, 999901 + FLAGS.Steps]. This works for the base size because the base model starts finetuning at step 999900, but the other model sizes start finetuning at different step counts, so the code needs to be modified to compute the right evaluation range for each size.
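One possible shape for the fix, sketched below: look up the finetuning start step per model size instead of hard-coding the base model's value. Only the base-size start step (999900) comes from this issue; the helper name, the other sizes, and their step counts are placeholders for illustration.

```python
# Pretraining step counts at which finetuning begins, per model size.
# Only "base" (999900) is stated in the issue; the others are placeholders.
FINETUNE_START_STEPS = {
    "small": 1000000,   # placeholder
    "base": 999900,     # stated in the issue
    "large": 1000700,   # placeholder
}

def checkpoint_eval_range(model_size, eval_start, steps):
    """Return the (start, end) range of checkpoint steps to evaluate."""
    finetune_start = FINETUNE_START_STEPS[model_size]
    # Mirror the existing [FLAGS.eval_start, 999901 + FLAGS.Steps] pattern,
    # but anchor the end of the range on the size-specific start step
    # rather than the base model's hard-coded 999900.
    return eval_start, finetune_start + 1 + steps

start, end = checkpoint_eval_range("base", 999900, 100)
# → (999900, 1000001), matching the current base-size behavior
```

With the mapping in place, the evaluation loop can iterate checkpoints for any size by passing its name, rather than assuming every model finetunes from step 999900.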
