Awesome Plasticity Loss

A collection of papers and codebases on plasticity loss, accompanying our survey on the topic, with a focus on deep Reinforcement Learning. Papers are categorized by their approach to remedying plasticity loss. Inspired by similar repos on self-supervised learning and in-context Reinforcement Learning.

Caution

Repo vs survey state

We aim to keep this repository up-to-date with the latest research on plasticity loss. This is more difficult for the accompanying survey, which is a snapshot of the field at the time of writing. If you are looking for the most recent research, please refer to the papers in this repository.

Contributing

Feel free to contribute either with a PR or by opening an issue.

Format for papers:

- Paper Name.
  [[pdf]](link)
  [[code]](link)
  - Author 1, Author 2, and Author 3. *Conference Year*

Table of Contents

  • Weight Resets
    • Non-targeted
    • Targeted
  • Parameter Regularization
  • Feature Rank Regularization
  • Activation Functions
  • Categorical Losses
  • Distillation
  • Architectures
  • Other Approaches and Papers
  • Combined Methods
  • Citation

Weight Resets

General resetting algorithms.

Non-targeted

Reset parameters of the network irrespective of their utility to the agent; a minimal sketch follows the list.

  • The Primacy Bias in Deep Reinforcement Learning. [pdf] [code]
    • Evgenii Nikishin, Max Schwarzer, Pierluca D’Oro, Pierre-Luc Bacon, Aaron Courville. ICML 2022
  • Sample-Efficient Reinforcement Learning by Breaking the Replay Ratio Barrier. [pdf] [code]
    • Pierluca D'Oro, Max Schwarzer, Evgenii Nikishin, Pierre-Luc Bacon, Marc G Bellemare, Aaron Courville. ICLR 2023
  • DrM: Mastering Visual Reinforcement Learning through Dormant Ratio Minimization. [pdf] [code]
    • Guowei Xu, Ruijie Zheng, Yongyuan Liang, Xiyao Wang, Zhecheng Yuan, Tianying Ji, Yu Luo, Xiaoyu Liu, Jiaxin Yuan, Pu Hua, Shuzhen Li, Yanjie Ze, Hal Daumé III, Furong Huang, Huazhe Xu. ICLR 2024
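A minimal PyTorch sketch of the non-targeted recipe, in the spirit of the primacy-bias paper: re-initialize the last layers on a fixed schedule, regardless of what they encode. The network layout, helper names, and reset period are illustrative, not taken from any of the listed codebases.

```python
import torch
import torch.nn as nn

def make_q_network(obs_dim: int, n_actions: int) -> nn.Sequential:
    # Encoder followed by a head; only the head gets reset below.
    return nn.Sequential(
        nn.Linear(obs_dim, 256), nn.ReLU(),
        nn.Linear(256, 256), nn.ReLU(),
        nn.Linear(256, n_actions),
    )

def hard_reset_head(net: nn.Sequential, n_head_layers: int = 1) -> None:
    """Re-initialize the last n_head_layers Linear layers in place."""
    linear_layers = [m for m in net if isinstance(m, nn.Linear)]
    for layer in linear_layers[-n_head_layers:]:
        layer.reset_parameters()

net = make_q_network(obs_dim=8, n_actions=4)
for step in range(1, 100_001):
    # ... gradient updates on replay batches would happen here ...
    if step % 40_000 == 0:  # the reset period is a tuned hyperparameter
        hard_reset_head(net, n_head_layers=2)
```

Papers differ on details such as how many layers to reset and whether optimizer state is cleared as well.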

Targeted

Reset specific parameters/neurons of the network, usually based on some measure of utility (illustrated after the list).

  • Loss of plasticity in deep continual learning. [pdf] [code]
    • Shibhansh Dohare, J Fernando Hernandez-Garcia, Qingfeng Lan, Parash Rahman, A Rupam Mahmood, Richard S Sutton. Nature 2024
  • The Dormant Neuron Phenomenon in Deep Reinforcement Learning. [pdf] [code (jax)] [code (pytorch)]
    • Ghada Sokar, Rishabh Agarwal, Pablo Samuel Castro, Utku Evci. ICML 2023
  • Addressing loss of plasticity and catastrophic forgetting in continual learning. [pdf] [code]
    • Mohamed Elsayed, A Rupam Mahmood. ICLR 2024
  • Deep Reinforcement Learning with Plasticity Injection. [pdf] [code]
    • Evgenii Nikishin, Junhyuk Oh, Georg Ostrovski, Clare Lyle, Razvan Pascanu, Will Dabney, Andre Barreto. NeurIPS 2023
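To make the targeted variant concrete, here is a rough sketch of a ReDo-style dormant-neuron reset for a single hidden layer, following the recipe of the dormant-neuron paper (re-initialize the incoming weights of inactive units, zero their outgoing weights). The threshold value and names are illustrative; the real implementations also handle convolutions, optimizer state, and more.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def redo_reset(fc1: nn.Linear, fc2: nn.Linear, batch: torch.Tensor, tau: float = 0.025):
    """Reset dormant units of fc1. A unit is dormant when its mean |activation|
    over a batch, normalized by the layer average, falls below tau."""
    h = torch.relu(fc1(batch))             # (batch, hidden) activations
    score = h.abs().mean(dim=0)            # per-unit activity
    score = score / (score.mean() + 1e-8)  # normalize by the layer mean
    dormant = score <= tau                 # boolean mask over hidden units
    if dormant.any():
        fresh = nn.Linear(fc1.in_features, fc1.out_features)
        fc1.weight[dormant] = fresh.weight[dormant]  # re-init incoming weights
        fc1.bias[dormant] = fresh.bias[dormant]
        fc2.weight[:, dormant] = 0.0  # zero outgoing weights: output unchanged
```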

Parameter Regularization

Regularize the parameters of the network towards values that are less prone to plasticity loss; one example is sketched after the list.

  • Maintaining Plasticity in Continual Learning via Regenerative Regularization. [pdf] [code]
    • Saurabh Kumar, Henrik Marklund, Benjamin Van Roy.
  • Towards Deeper Deep Reinforcement Learning with Spectral Normalization. [pdf]
    • Johan Bjorck, Carla P. Gomes, Kilian Q. Weinberger. NeurIPS 2021
  • Spectral Normalisation for Deep Reinforcement Learning: an Optimisation Perspective. [pdf] [code]
    • Florin Gogianu, Tudor Berariu, Mihaela Rosca, Claudia Clopath, Lucian Busoniu, Razvan Pascanu. ICML 2021
  • Weight Clipping for Deep Continual and Reinforcement Learning. [pdf] [code]
    • Mohamed Elsayed, Qingfeng Lan, Clare Lyle, A. Rupam Mahmood. RLC 2024
  • Directions of Curvature as an Explanation for Loss of Plasticity. [pdf]
    • Alex Lewandowski, Haruto Tanaka, Dale Schuurmans, Marlos C. Machado.
  • Learning Continually by Spectral Regularization. [pdf]
    • Alex Lewandowski, Saurabh Kumar, Dale Schuurmans, András György, Marlos C. Machado.
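As a concrete instance, a sketch of regenerative regularization from the first entry: an L2 penalty toward the weights' initial values rather than toward zero, keeping the network close to its random (highly plastic) initialization. The coefficient and helper names are illustrative.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def regen_penalty(net: nn.Module, init_net: nn.Module, coef: float = 1e-3) -> torch.Tensor:
    """L2 penalty pulling current weights back toward their values at init."""
    return coef * sum((p - p0).pow(2).sum()
                      for p, p0 in zip(net.parameters(), init_net.parameters()))

net = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 4))
init_net = copy.deepcopy(net).requires_grad_(False)  # frozen snapshot at init

x, y = torch.randn(32, 8), torch.randn(32, 4)
loss = F.mse_loss(net(x), y) + regen_penalty(net, init_net)
loss.backward()
```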

Feature Rank Regularization

Regularize the rank of the feature matrix, either directly or indirectly; a diagnostic for measuring it is sketched after the list.

  • Implicit Under-Parameterization Inhibits Data-Efficient Deep Reinforcement Learning. [pdf] [code]
    • Aviral Kumar, Rishabh Agarwal, Dibya Ghosh, Sergey Levine. ICLR 2021
  • Understanding and Preventing Capacity Loss in Reinforcement Learning. [pdf] [code]
    • Clare Lyle, Mark Rowland, Will Dabney. ICLR 2022
  • DR3: Value-Based Deep Reinforcement Learning Requires Explicit Regularization. [pdf]
    • Aviral Kumar, Rishabh Agarwal, Tengyu Ma, Aaron Courville, George Tucker, Sergey Levine. ICLR 2022
  • An empirical study of implicit regularization in deep offline RL. [pdf]
    • Caglar Gulcehre, Srivatsan Srinivasan, Jakub Sygnowski, Georg Ostrovski, Mehrdad Farajtabar, Matthew Hoffman, Razvan Pascanu, Arnaud Doucet. TMLR 2022
  • Adaptive Regularization of Representation Rank as an Implicit Constraint of Bellman Equation. [pdf] [code]
    • Qiang He, Tianyi Zhou, Meng Fang, Setareh Maghsudi. ICLR 2024
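These works typically quantify collapse with an effective-rank statistic. Below is a sketch of the srank diagnostic from the implicit under-parameterization paper: the smallest k whose top-k singular values capture a (1 − δ) fraction of the total spectral mass (conventions vary slightly between papers).

```python
import torch

@torch.no_grad()
def srank(features: torch.Tensor, delta: float = 0.01) -> int:
    """Effective rank of a (batch, dim) feature matrix: the smallest k such
    that the top-k singular values hold a (1 - delta) share of the mass."""
    sv = torch.linalg.svdvals(features)              # singular values, descending
    cumulative = torch.cumsum(sv, dim=0) / sv.sum()
    return int((cumulative >= 1.0 - delta).nonzero()[0].item()) + 1

phi = torch.randn(256, 64)  # e.g. penultimate-layer features for a batch
print(srank(phi))           # near 64 for random features; drops as rank collapses
```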

Activation Functions

Proposed activation functions that mitigate plasticity loss; one example is sketched after the list.

  • An Evaluation of Parametric Activation Functions for Deep Learning. [pdf]
    • Luke B. Godfrey. IEEE SMC 2019
  • Loss of Plasticity in Continual Deep Reinforcement Learning. [pdf]
    • Zaheer Abbas, Rosie Zhao, Joseph Modayil, Adam White, Marlos C. Machado. CoLLAs 2023
  • Adaptive Rational Activations to Boost Deep Reinforcement Learning. [pdf] [code]
    • Quentin Delfosse, Patrick Schramowski, Martin Mundt, Alejandro Molina, Kristian Kersting. ICLR 2024
  • Hadamard Representations: Augmenting Hyperbolic Tangents in RL. [pdf]
    • Jacob E. Kooi, Mark Hoogendoorn, Vincent François-Lavet.
  • Plastic Learning with Deep Fourier Features. [pdf]
    • Alex Lewandowski, Dale Schuurmans, Marlos C. Machado.
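As one example, a sketch of CReLU (concatenated ReLU), the activation studied in the continual deep RL entry above: every unit is active for some inputs, which counteracts units dying. Note that it doubles the width seen by the next layer.

```python
import torch
import torch.nn as nn

class CReLU(nn.Module):
    """Concatenated ReLU: [relu(x), relu(-x)], doubling the feature width."""
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.cat([torch.relu(x), torch.relu(-x)], dim=-1)

# The layer after CReLU must accept twice the width (128 -> 256).
net = nn.Sequential(nn.Linear(8, 128), CReLU(), nn.Linear(256, 4))
print(net(torch.randn(32, 8)).shape)  # torch.Size([32, 4])
```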

Categorical Losses

Project the scalar regression targets onto a categorical distribution to apply a cross-entropy loss; see the sketch after the list.

  • Improving Regression Performance with Distributional Losses. [pdf] [code]
    • Ehsan Imani, Martha White. ICML 2018
  • Stop Regressing: Training Value Functions via Classification for Scalable Deep RL. [pdf]
    • Jesse Farebrother, Jordi Orbay, Quan Vuong, Adrien Ali Taiga, Yevgen Chebotar, Ted Xiao, Alex Irpan, Sergey Levine, Pablo Samuel Castro, Aleksandra Faust, Aviral Kumar, Rishabh Agarwal. ICML 2024
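A sketch of the core transformation using a two-hot projection (one of the projections compared in the Stop Regressing paper): each scalar target splits its probability mass between the two nearest bin centers, and the value head is trained with cross-entropy over the bins. The bin range and count are illustrative.

```python
import torch
import torch.nn.functional as F

def two_hot(target: torch.Tensor, bins: torch.Tensor) -> torch.Tensor:
    """Project scalar targets onto a categorical distribution over fixed bin
    centers, splitting each target's mass between its two nearest bins."""
    target = target.clamp(bins[0].item(), bins[-1].item())
    idx = torch.searchsorted(bins, target).clamp(1, len(bins) - 1)
    lo, hi = bins[idx - 1], bins[idx]
    w_hi = (target - lo) / (hi - lo)  # linear-interpolation weight on upper bin
    probs = torch.zeros(*target.shape, len(bins))
    probs.scatter_(-1, (idx - 1).unsqueeze(-1), (1.0 - w_hi).unsqueeze(-1))
    probs.scatter_(-1, idx.unsqueeze(-1), w_hi.unsqueeze(-1))
    return probs

bins = torch.linspace(-10.0, 10.0, 51)  # support; range and count are illustrative
logits = torch.randn(32, 51)            # the value head outputs 51 logits, not 1 scalar
targets = torch.randn(32) * 5.0
loss = F.cross_entropy(logits, two_hot(targets, bins))  # soft-label cross-entropy
```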

Distillation

Periodically distill the knowledge of the network into a fresh network, as sketched after the list.

  • Transient Non-Stationarity and Generalisation in Deep Reinforcement Learning. [pdf] [code]
    • Maximilian Igl, Gregory Farquhar, Jelena Luketina, Wendelin Boehmer, Shimon Whiteson. ICLR 2021
  • Slow and Steady Wins the Race: Maintaining Plasticity with Hare and Tortoise Networks. [pdf] [code]
    • Hojoon Lee, Hyunseo Cho, Donghu Kim, Hyunseung Kim, Dukgi Min, Jaegul Choo, and Clare Lyle. ICML 2024
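The common recipe, sketched below with illustrative names and hyperparameters: periodically train a freshly initialized student to match the current network's outputs on stored inputs, then continue RL training from the student. The listed papers add important details (what data to distill on, soft vs. hard targets, slow/EMA networks).

```python
import torch
import torch.nn.functional as F

def distill_into_fresh(teacher: torch.nn.Module, make_net, data: torch.Tensor,
                       steps: int = 1000, lr: float = 1e-3) -> torch.nn.Module:
    """Train a freshly initialized network to match the teacher's outputs."""
    student = make_net()
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    for _ in range(steps):
        x = data[torch.randint(len(data), (64,))]  # minibatch of stored inputs
        with torch.no_grad():
            target = teacher(x)                    # teacher outputs as targets
        loss = F.mse_loss(student(x), target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return student  # RL training then continues from the fresh network
```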

Architectures

Specific architectures or interventions on the architecture (e.g., pruning) that mitigate plasticity loss; a recurring building block is sketched after the list.

  • Bigger, Regularized, Optimistic: scaling for compute and sample-efficient continuous control. [pdf] [code]
    • Michal Nauman, Mateusz Ostaszewski, Krzysztof Jankowski, Piotr Miłoś, Marek Cygan. NeurIPS 2024
  • SimBa: Simplicity Bias for Scaling Up Parameters in Deep Reinforcement Learning. [pdf] [code]
    • Hojoon Lee, Dongyoon Hwang, Donghu Kim, Hyunseung Kim, Jun Jet Tai, Kaushik Subramanian, Peter R. Wurman, Jaegul Choo, Peter Stone, Takuma Seno.
  • In value-based deep reinforcement learning, a pruned network is a good network. [pdf] [code]
    • Johan Obando-Ceron, Aaron Courville, Pablo Samuel Castro. ICML 2024
  • Neuroplastic Expansion in Deep Reinforcement Learning. [pdf]
    • Jiashun Liu, Johan Obando-Ceron, Aaron Courville, Ling Pan.
  • Mixtures of Experts Unlock Parameter Scaling for Deep RL. [pdf] [code]
    • Johan Obando-Ceron, Ghada Sokar, Timon Willi, Clare Lyle, Jesse Farebrother, Jakob Foerster, Gintare Karolina Dziugaite, Doina Precup, Pablo Samuel Castro. ICML 2024
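Several of these architectures share a pre-LayerNorm residual MLP block. The sketch below is only in the spirit of SimBa/BRO, whose exact blocks differ (input projections, post-norms, widths).

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Pre-LayerNorm residual MLP block, loosely SimBa/BRO-style."""
    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.mlp(self.norm(x))  # the skip keeps a linear path open

torso = nn.Sequential(*[ResidualBlock(256, 1024) for _ in range(2)])
print(torso(torch.randn(32, 256)).shape)  # torch.Size([32, 256])
```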

Other Approaches and Papers

Methods that do not fit in the previous categories.

  • Is High Variance Unavoidable in RL? A Case Study in Continuous Control. [pdf]
    • Johan Bjorck, Carla P. Gomes, Kilian Q. Weinberger. ICLR 2022
  • Resetting the Optimizer in Deep RL: An Empirical Study. [pdf]
    • Kavosh Asadi, Rasool Fakoor, Shoham Sabach. NeurIPS 2023
  • Revisiting Plasticity in Visual Reinforcement Learning: Data, Modules and Training Stages. [pdf] [code]
    • Guozheng Ma, Lu Li, Sen Zhang, Zixuan Liu, Zhen Wang, Yixin Chen, Li Shen, Xueqian Wang, Dacheng Tao. ICLR 2024
  • Sharpness-Aware Minimization for Efficiently Improving Generalization. [pdf] [code]
    • Pierre Foret, Ariel Kleiner, Hossein Mobahi, Behnam Neyshabur. ICLR 2021
  • Harnessing Discrete Representations For Continual Reinforcement Learning. [pdf] [code]
    • Edan Meyer, Adam White, Marlos C. Machado.
  • A Study of Plasticity Loss in On-Policy Deep Reinforcement Learning. [pdf] [code]
    • Arthur Juliani, Jordan T. Ash. NeurIPS 2024

Combined Methods

Combinations of the previous methods; a recurring ingredient is sketched after the list.

  • PLASTIC: Improving Input and Label Plasticity for Sample Efficient Reinforcement Learning. [pdf] [code]
    • Methods: SAM, LayerNorm, CReLU, Hard head resets
    • Hojoon Lee, Hanseul Cho, Hyunseung Kim, Daehoon Gwak, Joonkee Kim, Jaegul Choo, Se-Young Yun, Chulhee Yun. NeurIPS 2023
  • Bigger, Better, Faster: Human-level Atari with human-level efficiency. [pdf] [code]
    • Methods: Shrink & Perturb CNN resets, Hard head resets, Weight decay
    • Max Schwarzer, Johan Obando-Ceron, Aaron Courville, Marc Bellemare, Rishabh Agarwal, Pablo Samuel Castro. ICML 2023
  • Overestimation, Overfitting, and Plasticity in Actor-Critic: the Bitter Lesson of Reinforcement Learning. [pdf]
    • Methods: LayerNorm + Weight decay / Hard resets + Weight decay / LayerNorm + Hard resets
    • Michal Nauman, Michał Bortkiewicz, Piotr Miłoś, Tomasz Trzciński, Mateusz Ostaszewski, Marek Cygan. ICML 2024
  • Bigger, Regularized, Optimistic: scaling for compute and sample-efficient continuous control. [pdf] [code]
    • Methods: Hard resets, LayerNorm, Weight decay
    • Michal Nauman, Mateusz Ostaszewski, Krzysztof Jankowski, Piotr Miłoś, Marek Cygan. NeurIPS 2024
  • Disentangling the Causes of Plasticity Loss in Neural Networks. [pdf]
    • Methods: LayerNorm, Weight decay
    • Clare Lyle, Zeyu Zheng, Khimya Khetarpal, Hado van Hasselt, Razvan Pascanu, James Martens, Will Dabney.
  • Normalization and effective learning rates in reinforcement learning. [pdf]
    • Methods: LayerNorm, Parameter regularization
    • Clare Lyle, Zeyu Zheng, Khimya Khetarpal, James Martens, Hado van Hasselt, Razvan Pascanu, Will Dabney.
  • Understanding plasticity in neural networks. [pdf]
    • Methods: LayerNorm, Weight decay, Categorical loss
    • Clare Lyle, Zeyu Zheng, Evgenii Nikishin, Bernardo Avila Pires, Razvan Pascanu, Will Dabney. ICML 2023
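One ingredient that recurs in these combinations is Shrink & Perturb (Ash & Adams, 2020), used for the CNN resets in BBF: instead of a full reset, interpolate the current weights toward a freshly initialized copy. A minimal PyTorch sketch with an illustrative shrink factor:

```python
import copy
import torch

@torch.no_grad()
def shrink_and_perturb(net: torch.nn.Module, shrink: float = 0.8) -> None:
    """Interpolate every parameter toward a fresh random initialization."""
    fresh = copy.deepcopy(net)
    for m in fresh.modules():
        if hasattr(m, "reset_parameters"):
            m.reset_parameters()  # re-randomize the copy
    for p, q in zip(net.parameters(), fresh.parameters()):
        p.mul_(shrink).add_((1.0 - shrink) * q)  # theta <- a*theta + (1-a)*theta0
```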

Citation

If you find this repository useful, please consider citing our survey paper:

@article{klein2024plasticity,
  title={Plasticity Loss in Deep Reinforcement Learning: A Survey},
  author={Klein, Timo and Miklautz, Lukas and Sidak, Kevin and Plant, Claudia and Tschiatschek, Sebastian},
  journal={arXiv e-prints},
  pages={arXiv--2411},
  year={2024}
}