[RLlib] Fix Batch Norm Model flakeyness #31371
Conversation
@sven1977 This seems to get rid of the flakiness, but it might also get rid of testing learning here. Wdyt?
rllib/examples/batch_norm_model.py (Outdated)

    "--time-total-s",
    type=float,
    default=60 * 60,
    help="Time after which we stop " "training in seconds.",
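The "typo" flagged in review refers to the help string being split across two adjacent literals in the diff above. Python joins adjacent string literals at parse time, so the rendered message is still a single sentence; a short illustration (the strings are copied from the diff, the variable names are just for this sketch):

```python
# Adjacent string literals are concatenated by the Python parser, so the
# split help text from the diff still renders as one sentence.
split_help = "Time after which we stop " "training in seconds."
joined_help = "Time after which we stop training in seconds."

assert split_help == joined_help
print(split_help)
```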
Typo, I'll fix after review 😓
Signed-off-by: Sven Mika <sven@anyscale.io>
LGTM, let's try this :)
…ect#31371)
Signed-off-by: tmynn <hovhannes.tamoyan@gmail.com>
Signed-off-by: Artur Niederfahrenhorst <artur@anyscale.com>
Why are these changes needed?
Our batch norm model tests have been amongst the flakiest of RLlib tests.
This PR aims to reduce this flakiness.
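A wall-clock budget is one common way to bound a flaky learning test. As a rough, hypothetical sketch (not the PR's actual change set), the `--time-total-s` flag from the example script can be fed into Ray Tune's `stop` criteria, where a `time_total_s` entry ends a trial once the time budget is spent; the flag name and default mirror the diff in this PR, while the surrounding wiring is illustrative only:

```python
import argparse

# Hypothetical sketch: the --time-total-s flag from the example script,
# wired into a Ray Tune stop criterion. Flag name and default mirror the
# diff in this PR; the surrounding wiring is an assumption for illustration.
parser = argparse.ArgumentParser()
parser.add_argument(
    "--time-total-s",
    type=float,
    default=60 * 60,
    help="Time after which we stop training in seconds.",
)
args = parser.parse_args([])  # use defaults for illustration

# Ray Tune accepts a dict of stop criteria; a "time_total_s" entry ends a
# trial once this much wall-clock time has elapsed, e.g.:
#   tune.run(trainable, stop=stop, ...)
stop = {"time_total_s": args.time_total_s}
print(stop)
```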
Checks
- I've signed off every commit (by using the -s flag, i.e., git commit -s) in this PR.
- I've run scripts/format.sh to lint the changes in this PR.