Habitat-Lab is a modular high-level library for end-to-end development in embodied AI -- defining embodied AI tasks (e.g. navigation, rearrangement, instruction following, question answering), configuring embodied agents (physical form, sensors, capabilities), training these agents (via imitation or reinforcement learning, or no learning at all as in SensePlanAct pipelines), and benchmarking their performance on the defined tasks using standard metrics.
Habitat-Lab uses Habitat-Sim as the core simulator. Refer to the Habitat-Lab documentation for details.
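As a rough taste of that task/agent/benchmark workflow, here is a minimal, hedged sketch using habitat's Benchmark and Agent helpers. The config path and the action name are assumptions (the installation and data-download steps that make them work come later in this README):
import habitat

# A trivial agent that ignores its observations and always moves forward.
# The action name is an assumption and must match the task's action space.
class ForwardOnlyAgent(habitat.Agent):
    def reset(self) -> None:
        pass

    def act(self, observations):
        return {"action": "MOVE_FORWARD"}

# Benchmark steps the agent through the task's episodes and averages the
# task's standard metrics (for PointNav: distance_to_goal, success, SPL).
# The config path is an assumption mirroring the layout used later in this README.
benchmark = habitat.Benchmark("habitat-lab/habitat/config/tasks/pointnav.yaml")
print(benchmark.evaluate(ForwardOnlyAgent(), num_episodes=10))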
If you use the Habitat platform in your research, please cite the Habitat 1.0 and Habitat 2.0 papers:
@inproceedings{szot2021habitat,
  title     = {Habitat 2.0: Training Home Assistants to Rearrange their Habitat},
  author    = {Andrew Szot and Alex Clegg and Eric Undersander and Erik Wijmans and Yili Zhao and John Turner and Noah Maestre and Mustafa Mukadam and Devendra Chaplot and Oleksandr Maksymets and Aaron Gokaslan and Vladimir Vondrus and Sameer Dharur and Franziska Meier and Wojciech Galuba and Angel Chang and Zsolt Kira and Vladlen Koltun and Jitendra Malik and Manolis Savva and Dhruv Batra},
  booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
  year      = {2021}
}
@inproceedings{habitat19iccv,
  title     = {Habitat: {A} {P}latform for {E}mbodied {AI} {R}esearch},
  author    = {Manolis Savva and Abhishek Kadian and Oleksandr Maksymets and Yili Zhao and Erik Wijmans and Bhavana Jain and Julian Straub and Jia Liu and Vladlen Koltun and Jitendra Malik and Devi Parikh and Dhruv Batra},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year      = {2019}
}
-
Preparing conda env
Assuming you have conda installed, let's prepare a conda env:
# We require python>=3.7 and cmake>=3.10
conda create -n habitat python=3.7 cmake=3.14.0
conda activate habitat
-
conda install habitat-sim
- To install habitat-sim with bullet physics:
conda install habitat-sim withbullet -c conda-forge -c aihabitat
See Habitat-Sim's installation instructions for more details.
-
pip install the stable version of habitat-lab:
git clone --branch stable https://github.com/facebookresearch/habitat-lab.git
cd habitat-lab
pip install -e habitat-lab  # install habitat_lab
-
Install habitat-baselines.
The command above installs only the core of Habitat-Lab. To include habitat_baselines along with all additional requirements, use the command below after installing habitat-lab:
pip install -e habitat-baselines # install habitat_baselines
-
Let's download some 3D assets using Habitat-Sim's python data download utility:
-
Download (testing) 3D scenes:
python -m habitat_sim.utils.datasets_download --uids habitat_test_scenes --data-path data/
Note that these testing scenes do not provide semantic annotations.
-
Download point-goal navigation episodes for the test scenes:
python -m habitat_sim.utils.datasets_download --uids habitat_test_pointnav_dataset --data-path data/
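These episode files are plain gzipped JSON, so a quick sanity check of the download is possible with the standard library. The exact path below is an assumption based on the default data layout produced by the download utility:
import gzip
import json

# Path is an assumption based on the default layout produced by datasets_download.
path = "data/datasets/pointnav/habitat-test-scenes/v1/train/train.json.gz"
with gzip.open(path, "rt") as f:
    dataset = json.load(f)

episodes = dataset["episodes"]
print(f"{len(episodes)} episodes")
# Each episode records the scene, a start pose, and the point goal.
print(episodes[0]["scene_id"], episodes[0]["goals"][0]["position"])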
-
Non-interactive testing: Test the PointNav task: Run the example pointnav script
python examples/example_pointnav.py
which instantiates a PointNav agent in the testing scenes and episodes. The agent takes random actions and you should see something like:
[16:11:11:718584]:[Sim] Simulator.cpp(205)::reconfigure : CreateSceneInstance success == true for active scene name : data/scene_datasets/habitat-test-scenes/skokloster-castle.glb with renderer.
2022-08-13 16:26:45,068 Initializing task Nav-v0
Environment creation successful
Agent stepping around inside environment.
Episode finished after 5 steps.
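Under the hood, the script simply steps a habitat.Env with random actions. Here is a minimal sketch of the same loop that also reads the task's metrics at the end; the config path is an assumption mirroring the paths used elsewhere in this README:
import habitat

# Config path is an assumption; adjust to your checkout's PointNav task config.
config = habitat.get_config("habitat-lab/habitat/config/tasks/pointnav.yaml")
env = habitat.Env(config=config)

observations = env.reset()
while not env.episode_over:
    observations = env.step(env.action_space.sample())

# Standard PointNav measures such as distance_to_goal, success, and SPL.
print(env.get_metrics())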
-
Non-interactive testing: Test the Pick task: Run the example pick task script
python examples/example.py
which uses
habitat-lab/habitat/config/tasks/rearrange/pick.yaml
for the configuration of the task and agent:
import habitat

# Load embodied AI task (RearrangePick) and a pre-specified virtual robot
env = habitat.Env(
    config=habitat.get_config("habitat-lab/habitat/config/tasks/rearrange/pick.yaml")
)
observations = env.reset()

# Step through environment with random actions
while not env.episode_over:
    observations = env.step(env.action_space.sample())
This script instantiates a Pick agent in ReplicaCAD scenes. The agent takes random actions and you should see something like:
Agent acting inside environment.
Renderer: AMD Radeon Pro 5500M OpenGL Engine by ATI Technologies Inc.
Episode finished after 200 steps.
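The observations returned by env.reset() and env.step() above form a dictionary keyed by sensor name. A quick way to see what the configured robot senses (the sensor names depend on the task config):
# `observations` comes from the snippet above; sensor names depend on pick.yaml.
for name, value in observations.items():
    shape = getattr(value, "shape", None)
    print(name, shape if shape is not None else type(value).__name__)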
See examples/register_new_sensors_and_measures.py for an example of how to extend habitat-lab from outside the source code (a sketch of the registration pattern is shown below).
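For reference, registering a new measure from outside the library follows roughly this pattern. This is a minimal sketch; the class and uuid below are hypothetical, and the measure still has to be listed in the task config's measurements to become active:
from habitat.core.embodied_task import Measure
from habitat.core.registry import registry

# Hypothetical example measure: counts the steps taken in the current episode.
@registry.register_measure
class EpisodeLength(Measure):
    cls_uuid = "episode_length"

    def _get_uuid(self, *args, **kwargs) -> str:
        return self.cls_uuid

    def reset_metric(self, *args, **kwargs) -> None:
        self._metric = 0

    def update_metric(self, *args, **kwargs) -> None:
        self._metric += 1
-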
Interactive testing: Use your keyboard and mouse to control a Fetch robot in a ReplicaCAD environment:
# Pygame for interactive visualization, pybullet for inverse kinematics
pip install pygame==2.0.1 pybullet==3.0.4
# Interactive play script
python examples/interactive_play.py --never-end --add-ik
Use the I/J/K/L keys to move the robot base forward/left/backward/right, W/A/S/D to move the arm end-effector forward/left/backward/right, and E/Q to move the arm up/down. The arm can be difficult to control via end-effector control. More details are in the documentation. Try to move the base and the arm to touch the red bowl on the table. Have fun!
Browse the online Habitat-Lab documentation and the extensive tutorial on how to train your agents with Habitat. For Habitat 2.0, use this quickstart guide.
We provide Docker containers for Habitat, updated approximately once per year for the Habitat Challenge. This works on machines with an NVIDIA GPU and requires users to install nvidia-docker. To set up the Habitat stack using Docker, follow the steps below:
-
Pull the habitat docker image:
docker pull fairembodied/habitat-challenge:testing_2022_habitat_base_docker
-
Start an interactive bash session inside the habitat docker:
docker run --runtime=nvidia -it fairembodied/habitat-challenge:testing_2022_habitat_base_docker
-
Activate the habitat conda environment:
conda init; source ~/.bashrc; source activate habitat
-
Run the testing scripts as above:
cd habitat-lab; python examples/example_pointnav.py
This should print an output like:
Agent acting inside environment.
Episode finished after 200 steps.
Common task and episode datasets used with Habitat-Lab.
Habitat-Lab includes reinforcement learning (via PPO) and classical SLAM-based baselines. For running PPO training on sample data and more details, refer to habitat_baselines/README.md.
ROS-X-Habitat (https://github.com/ericchen321/ros_x_habitat) is a framework that bridges the AI Habitat platform (Habitat Lab + Habitat Sim) with other robotics resources via ROS. Compared with Habitat-PyRobot, ROS-X-Habitat places emphasis on 1) leveraging Habitat Sim v2's physics-based simulation capability and 2) allowing roboticists to access simulation assets from ROS. The work has also been published as a paper.
Note that ROS-X-Habitat was developed and is maintained by the Lab for Computational Intelligence at UBC; it is not yet officially supported by the Habitat Lab team. Please refer to the framework's repository for docs and discussions.
Habitat-Lab is MIT licensed. See the LICENSE file for details.
The trained models and the task datasets are considered data derived from the corresponding scene datasets.
- Matterport3D-based task datasets and trained models are distributed with the Matterport3D Terms of Use and under the CC BY-NC-SA 3.0 US license.
- Gibson-based task datasets, the code for generating such datasets, and trained models are distributed with the Gibson Terms of Use and under the CC BY-NC-SA 3.0 US license.