OfflineRL-Lib provides unofficial, benchmarked PyTorch implementations of selected offline RL algorithms, including:
- In-Sample Actor Critic (InAC)
- Extreme Q-Learning (XQL)
- Implicit Q-Learning (IQL)
- Decision Transformer (DT)
- Advantage-Weighted Actor Critic (AWAC)
- TD3-BC
- TD7
For Model-Based algorithms, please check OfflineRL-Kit!
- We benchmark and visualize the results via WandB. Click the WandB links below and group the runs by the entry `task` (for offline experiments) or `env` (for online experiments).
- Available Runs
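As a minimal sketch of the grouping convention above: runs carry a `task` entry (offline) or an `env` entry (online) in their logged config, and WandB can group on whichever is present. The config keys and helper below are illustrative assumptions, not the library's actual API.

```python
# Hypothetical run configs; "task" / "env" mirror the WandB grouping
# entries mentioned above. All other keys are illustrative.
offline_cfg = {"algo": "iql", "task": "halfcheetah-medium-v2", "seed": 0}
online_cfg = {"algo": "td7", "env": "Hopper-v4", "seed": 0}

def group_key(cfg: dict) -> str:
    # Offline runs group by "task"; online runs fall back to "env".
    return cfg.get("task") or cfg.get("env", "unknown")

print(group_key(offline_cfg))  # halfcheetah-medium-v2
print(group_key(online_cfg))   # Hopper-v4
```

In the WandB UI, the same effect is achieved by selecting `task` or `env` in the "Group by" dropdown on the project page.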
If you use OfflineRL-Lib in your work, please cite it with the following BibTeX entry:
```bibtex
@software{offlinerllib,
  author = {Gao, Chen-Xiao and Rui, Kong},
  month = feb,
  title = {{OfflineRL-Lib: Benchmarked Implementations of Offline RL Algorithms}},
  url = {https://github.com/typoverflow/OfflineRL-Lib},
  version = {0.1.5},
  year = {2023}
}
```
We thank CORL for providing fine-tuned hyperparameters.