This repository has been archived by the owner on Mar 1, 2024. It is now read-only.

How to apply R3M to downstream robotic task? #5

Closed
yquantao opened this issue Mar 29, 2022 · 2 comments

Comments

@yquantao

Hi,

In the paper, it is mentioned that only a few demonstrations are needed when applying R3M to downstream robot learning tasks. My question is: do we also need to add language annotations to these demos, or are video demos alone sufficient?

@suraj-nair-1
Contributor

No language is needed (it is only used during training); the pre-trained R3M model simply acts as an encoder mapping images to embeddings. So to train with imitation learning you just need data of (image, action) pairs: encode the images with R3M and train with your normal imitation learning loss. You can see an example of encoding a single image here

Best,
Suraj
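The recipe above (frozen encoder + standard imitation loss on (image, action) pairs) can be sketched roughly as follows. This is a minimal, self-contained sketch: a tiny random CNN stands in for the real frozen R3M encoder, and the policy head, action dimension, and MSE loss are illustrative assumptions, not the repository's code.

```python
import torch
import torch.nn as nn

# Stand-in for the frozen, pre-trained R3M encoder. The real R3M
# (ResNet50 variant) maps RGB images to 2048-d embeddings; a tiny
# random CNN is used here so the sketch runs self-contained.
encoder = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=8, stride=4),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
)
encoder.eval()
for p in encoder.parameters():
    p.requires_grad = False  # encoder stays frozen; only the policy trains

action_dim = 7  # illustrative choice, e.g. a 7-DoF arm
policy = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, action_dim))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

# A toy batch of (image, action) demonstration pairs.
images = torch.rand(16, 3, 224, 224)
actions = torch.rand(16, action_dim)

with torch.no_grad():
    embeddings = encoder(images)  # map images to embeddings (frozen)

pred = policy(embeddings)
loss = nn.functional.mse_loss(pred, actions)  # standard BC regression loss
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(embeddings.shape, loss.item())
```

Swapping the stand-in CNN for the actual `load_r3m(...)` model (kept in eval mode with gradients disabled) gives the setup described in the reply.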

@whc688

whc688 commented Apr 26, 2023

@suraj-nair-1 Hi!
If I want to use the frozen pre-trained R3M as an encoder for images, is there any image normalization I need to apply to the input, and if so, what should it be?
Thanks a lot!
