
Performance drop when resuming training with empirical normalization #17

Open
BIGheadLL opened this issue Nov 20, 2023 · 1 comment


@BIGheadLL

Hi there,

We noticed a performance drop when we resumed training with OnPolicyRunner with empirical normalization enabled in our environment.
[Screenshot from 2023-11-20 11-36-32: training curves for the compared runs]
There is a gap between the black line and the blue one.
Additionally, we found that model performance cannot increase without empirical normalization (the green and orange lines).
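
For context, by "empirical normalization" we mean the runner's running observation normalizer, i.e. something along the lines of the sketch below (the class name, attributes, and update rule here are illustrative only, not the exact rsl_rl implementation):

```python
import torch

class RunningObsNormalizer:
    """Rough sketch of an empirical observation normalizer: keep a running
    mean/variance of observations and whiten them before the policy sees
    them. Names are illustrative, not the actual rsl_rl API."""

    def __init__(self, obs_dim, eps=1e-2):
        self.mean = torch.zeros(obs_dim)
        self.var = torch.ones(obs_dim)
        self.count = eps  # avoids division by zero early on
        self.eps = eps

    def update(self, obs_batch):
        # obs_batch: (num_envs, obs_dim) tensor collected during rollouts
        batch_mean = obs_batch.mean(dim=0)
        batch_var = obs_batch.var(dim=0, unbiased=False)
        batch_count = obs_batch.shape[0]
        delta = batch_mean - self.mean
        total = self.count + batch_count
        # parallel (Welford-style) update of the running statistics
        self.mean = self.mean + delta * batch_count / total
        m_a = self.var * self.count
        m_b = batch_var * batch_count
        self.var = (m_a + m_b + delta.pow(2) * self.count * batch_count / total) / total
        self.count = total

    def __call__(self, obs):
        # whiten observations with the running statistics
        return (obs - self.mean) / torch.sqrt(self.var + self.eps)
```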

Many thanks.

@Mayankm96
Member

When you resume training, the episodes are typically "terminated" at random lengths to encourage collection of a diverse set of samples. Otherwise PPO can get stuck in a local minimum.

https://github.com/leggedrobotics/rsl_rl/blob/master/rsl_rl/runners/on_policy_runner.py#L67
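
Conceptually, the linked line does something along these lines (a sketch only; treat the exact attribute names `episode_length_buf` and `max_episode_length` on the wrapped environment as assumptions):

```python
import torch

def init_at_random_ep_len(env):
    # Spread the environments' elapsed episode lengths uniformly over
    # [0, max_episode_length) so they do not all reset in lockstep.
    # The first rollouts after (re)starting training then cover a more
    # diverse set of states.
    env.episode_length_buf = torch.randint_like(
        env.episode_length_buf, high=int(env.max_episode_length)
    )
```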
