Allow train.py-like config for eval.py #1351
Conversation
This looks reasonable to me. Want to see if Milo has any thoughts on how to make this nicer with the config_utils.py he added. (Pinged Milo because I can't add him as a reviewer for some reason.)
This looks good. I like that we don't have to modify the EvalConfig dataclass and that the transformation happens at a lower level. One question is whether the logged config should keep the original syntax of the file or not. I lean towards yes, in which case you may want to write a transform that's passed to make_dataclass_and_log_config, packaging your transformation into a function that is applied only to the eval config and not to the logged config.
I think it's fairly important that the config that gets logged is exactly equal to the config specified in the file. You can pass a function to make_dataclass_and_log_config, as specified here, which does your config transformation. Here is a good example of a config transform; I think this PR can be structured the same way.
lgtm, could go either way on milo's suggestion
I like @milocress's suggestion, I'll try to rework it as a config transform. Some clarifications:
The existing config transforms were added for train (and will all be applied for train), so I'd probably just make a function for now and not register it; we can revisit later if there's a reason to. And I'd suggest not conditionally applying the transform, but rather having the transform itself handle the `if model in config` check.
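A minimal sketch of what such an unregistered transform might look like. Everything here is illustrative, not taken from the PR: the function name is invented, and the assumption that eval.py expects a `models` list whose entries carry `model_name`, `model`, and `tokenizer` keys is inferred from the discussion rather than quoted from the code.

```python
from copy import deepcopy
from typing import Any, Dict


def allow_train_style_eval_config(config: Dict[str, Any]) -> Dict[str, Any]:
    """Rewrite top-level ``model``/``tokenizer`` keys into a ``models`` list entry.

    Hypothetical sketch: assumes eval.py consumes a ``models`` list of entries with
    ``model_name``, ``model``, and ``tokenizer`` keys. The transform does its own
    ``'model' in config`` check, so it can be applied unconditionally and is a
    no-op for configs already written in the eval.py syntax.
    """
    if 'model' not in config:
        return config

    config = deepcopy(config)
    model_entry: Dict[str, Any] = {'model': config.pop('model')}
    # `tokenizer` may be missing entirely (one of the edge cases called out in the PR).
    if 'tokenizer' in config:
        model_entry['tokenizer'] = config.pop('tokenizer')
    # Derive a display name for the entry; purely illustrative.
    model_entry['model_name'] = model_entry['model'].get('name', 'model')
    config.setdefault('models', []).append(model_entry)
    return config
```

Applied before the dataclass is constructed, a function like this leaves EvalConfig untouched; whether it also affects the logged config depends on where it runs relative to make_dataclass_and_log_config's logging, per the discussion above.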
lgtm, please add a manual test run in the pr description showing the functionality works
Updated the PR with a run that completed successfully with this transformation.
Currently, train.py accepts a config with top-level model and tokenizer keys, whereas eval.py requires a different syntax for specifying the model. This PR allows the user to run eval.py using the train.py syntax, which is easier when editing YAMLs directly.
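As a rough illustration (all key names and values below are assumptions, not copied from the PR's YAML examples), the two syntaxes correspond to parsed configs shaped roughly like this:

```python
# Hypothetical shapes only; actual key names and values may differ.

# train.py-style: model and tokenizer sit at the top level of the config.
train_style = {
    'model': {'name': 'hf_causal_lm', 'pretrained_model_name_or_path': 'my-org/my-model'},
    'tokenizer': {'name': 'my-org/my-model'},
}

# eval.py-style: the same information nested under a `models` list entry.
eval_style = {
    'models': [
        {
            'model_name': 'my-org/my-model',
            'model': {'name': 'hf_causal_lm', 'pretrained_model_name_or_path': 'my-org/my-model'},
            'tokenizer': {'name': 'my-org/my-model'},
        },
    ],
}
```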
I tried to make the implementation fully backwards compatible and to cover all edge cases (both keys specified, missing top-level keys like tokenizer, etc.).
EDIT: Run debug-eval-llama-3x-8b-g16-d10-B8JDJd completed successfully, and the config used `model` and `tokenizer` directly.