Fancy Gym

Built upon the foundation of Gymnasium (a maintained fork of OpenAI’s renowned Gym library), fancy_gym offers a comprehensive collection of reinforcement learning environments.

Key Features:

  • New Challenging Environments: fancy_gym includes several new environments (Panda Box Pushing, Table Tennis, etc.) that present a higher degree of difficulty, pushing the boundaries of reinforcement learning research.
  • Support for Movement Primitives: fancy_gym supports a range of movement primitives (MPs), including Dynamic Movement Primitives (DMPs), Probabilistic Movement Primitives (ProMPs), and Probabilistic Dynamic Movement Primitives (ProDMPs).
  • Upgrade to Movement Primitives: With our framework, it’s straightforward to transform standard Gymnasium environments into environments that support movement primitives.
  • Benchmark Suite Compatibility: fancy_gym makes it easy to access renowned benchmark suites such as DeepMind Control and Metaworld, whether you want to use them in the regular step-based setting or with MPs (see the sketch right after this list).
  • Contribute Your Own Environments: If you’re inspired to create custom gym environments, both step-based and with movement primitives, this guide will assist you. We encourage and highly appreciate submissions via PRs to integrate these environments into fancy_gym.
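
As a minimal sketch of how the wrapped benchmark suites are reached (the concrete IDs 'metaworld/button-press-v2' and 'dm_control/ball_in_cup-catch-v0', as well as the 'dm_control_ProMP' namespace, are assumptions based on the naming pattern shown in the Quickstart below; check the registry of your installed version for the exact names):

   import gymnasium as gym
   import fancy_gym  # importing fancy_gym registers the additional namespaces

   # Step-based access to a Metaworld task (the ID is an assumption, see above).
   env = gym.make('metaworld/button-press-v2')

   # MP-based access to a DeepMind Control task via its ProMP namespace
   # (ID and namespace are assumptions following the 'fancy_ProMP/...' pattern).
   mp_env = gym.make('dm_control_ProMP/ball_in_cup-catch-v0')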

Quickstart Guide

⚠ We recommend installing fancy_gym into a virtual environment, such as one provided by venv, Poetry, or Conda.

Install via pip or use an alternative installation method

    pip install 'fancy_gym[all]'
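
If you want the latest development version or plan to modify the environments, an editable install from a clone of this repository is a common alternative (a sketch; the optional [all] extra mirrors the pip command above):

    git clone https://github.com/ALRhub/fancy_gym.git
    cd fancy_gym
    pip install -e '.[all]'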

Try out one of our step-based environments or explore our other envs

   import gymnasium as gym
   import fancy_gym
   import time

   env = gym.make('fancy/BoxPushingDense-v0', render_mode='human')
   observation, info = env.reset()
   env.render()

   for i in range(1000):
       action = env.action_space.sample()  # Randomly sample an action
       observation, reward, terminated, truncated, info = env.step(action)
       time.sleep(1 / env.metadata['render_fps'])

       if terminated or truncated:
           observation, info = env.reset()
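
To see which step-based environments are registered, you can query Gymnasium's registry after importing fancy_gym. This minimal sketch only assumes the 'fancy/' namespace used above:

   import gymnasium as gym
   import fancy_gym  # importing fancy_gym registers its environments with Gymnasium

   # Collect the IDs of every registered environment in the 'fancy/' namespace.
   fancy_ids = sorted(spec.id for spec in gym.envs.registry.values()
                      if spec.id.startswith('fancy/'))
   print(fancy_ids)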

Explore the MP-based variant or learn more about Movement Primitives (MPs)

   import gymnasium as gym
   import fancy_gym

   env = gym.make('fancy_ProMP/BoxPushingDense-v0', render_mode='human')
   env.reset()
   env.render()

   for i in range(10):
       action = env.action_space.sample()  # Randomly sample MP parameters
       observation, reward, terminated, truncated, info = env.step(action)  # Executes the full trajectory generated from the sampled MP parameters
       observation, info = env.reset()
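
In the MP variants, a single call to step() consumes one set of MP parameters and rolls out the entire trajectory, so the action space describes those parameters rather than low-level controls. A quick way to inspect this is a sketch that relies only on standard Gymnasium attributes:

   import gymnasium as gym
   import fancy_gym

   env = gym.make('fancy_ProMP/BoxPushingDense-v0')

   # For MP environments, the action space is the space of MP parameters
   # (e.g. ProMP basis-function weights), not per-step joint commands.
   print(env.action_space)
   print(env.observation_space)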

Documentation

Documentation for fancy_gym can be found here; Usage Examples can be found here.

Citing the Project

To cite this repository in publications:

@software{fancy_gym,
	title = {Fancy Gym},
	author = {Otto, Fabian and Celik, Onur and Roth, Dominik and Zhou, Hongyi},
	abstract = {Fancy Gym: Unifying interface for various RL benchmarks with support for Black Box approaches.},
	url = {https://github.com/ALRhub/fancy_gym},
	organization = {Autonomous Learning Robots Lab (ALR) at KIT},
}

Icon Attribution

The icon is based on the Gymnasium icon, which can be found here.