Robot Utility Models

Project webpage · Documentation · Paper

Authors: Haritheja Etukuru*, Norihito Naka, Zijin Hu, Seungjae Lee, Julian Mehu, Aaron Edsinger, Chris Paxton, Soumith Chintala, Lerrel Pinto, Nur Muhammad “Mahi” Shafiullah*

Open-source repository of the hardware and software components of Robot Utility Models.

Video: what_is.mp4

Abstract

Robot models, particularly those trained with large amounts of data, have recently shown a plethora of real-world manipulation and navigation capabilities. Several independent efforts have shown that given sufficient training data in an environment, robot policies can generalize to demonstrated variations in that environment. However, needing to finetune robot models for every new environment stands in stark contrast to models in language or vision that can be deployed zero-shot for open-world problems. In this work, we present Robot Utility Models (RUMs), a framework for training and deploying zero-shot robot policies that can directly generalize to new environments without any finetuning. To create RUMs efficiently, we develop new tools to quickly collect data for mobile manipulation tasks, integrate such data into a policy with multi-modal imitation learning, and deploy policies on-device on Hello Robot Stretch, a cheap commodity robot, with an external mLLM verifier for retrying. We train five such utility models for opening cabinet doors, opening drawers, picking up napkins, picking up paper bags, and reorienting fallen objects. Our system achieves, on average, a 90% success rate in unseen, novel environments interacting with unseen objects. Moreover, the utility models can also succeed in different robot and camera setups with no further data, training, or finetuning. Primary among our lessons are the importance of training data over the training algorithm and policy class, guidance on data scaling, the necessity of diverse yet high-quality demonstrations, and a recipe for robot introspection and retrying to improve performance on individual environments.
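
The retrying mechanism mentioned above can be sketched as a simple loop. Below is a minimal illustration, not the paper's actual implementation: rollout, verify_success, and reset_robot are hypothetical callables standing in for one policy episode, the external mLLM verifier, and a robot reset.

from typing import Callable, List

def deploy_with_retries(
    rollout: Callable[[], List[bytes]],             # one policy episode -> camera frames
    verify_success: Callable[[List[bytes]], bool],  # external mLLM verifier on the frames
    reset_robot: Callable[[], None],                # return the robot to a retry-ready pose
    max_attempts: int = 3,
) -> bool:
    # Run the policy, ask the verifier whether the task succeeded,
    # and retry on failure up to max_attempts times.
    for _ in range(max_attempts):
        frames = rollout()
        if verify_success(frames):
            return True   # verifier judges the task complete
        reset_robot()     # otherwise reset and try again
    return False          # all retries exhausted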

What's in this repo

  1. hardware contains our 3D-printable STL files for the Stick V2, Hello Robot Stretch SE3, and UFactory xArm 7.
  2. imitation-in-homes contains code to download and load one of our robot utility models (see the loading sketch after this list).
  3. robot-server contains the code that runs on the robot to deploy the policy.
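
As a rough illustration of how a downloaded model might be inspected (the real download and loading code lives in the imitation-in-homes submodule; the checkpoint path below is hypothetical), a RUM checkpoint is an ordinary PyTorch artifact:

import torch

# Hypothetical path; see imitation-in-homes for the actual download/loading code.
checkpoint = torch.load("checkpoints/door_opening.pt", map_location="cpu")
print(type(checkpoint))  # e.g., a state dict or a scripted module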

Paper

Get it from arXiv or our website.

Citation

If you find any of our work useful, please cite us!

@misc{etukuru2024robot,
      title={Robot Utility Models: General Policies for Zero-Shot Deployment in New Environments}, 
      author={Haritheja Etukuru and Norihito Naka and Zijin Hu and Seungjae Lee and Julian Mehu and Aaron Edsinger and Chris Paxton and Soumith Chintala and Lerrel Pinto and Nur Muhammad Mahi Shafiullah},
      year={2024},
      eprint={2409.05865},
      archivePrefix={arXiv},
      primaryClass={cs.RO}
}
