This repository has been archived by the owner on Mar 1, 2024. It is now read-only.
Hi, I am interested in this work. In issue #5, you mentioned that the pre-trained R3M model simply acts as an encoder mapping images to embeddings. My question is: how is the whole framework used in a downstream robotic task after behavior cloning? Given just an image, can the robot perform the imitated task? How does this scale to more sophisticated environments? And how can the robot accomplish a task specified by a human without using language?
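For context, the pipeline the question describes (frozen pre-trained encoder → embedding → behavior-cloned policy head → action) can be sketched roughly as below. This is a minimal, self-contained illustration, not the repository's actual code: the random linear "encoder" stands in for the frozen R3M network, the dimensions are shrunk for the example, and the policy weights would in practice come from behavior-cloning training on demonstration data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shrunken stand-in dimensions (the real R3M ResNet-50 emits 2048-d embeddings
# from 224x224 images; a 7-DoF arm action is a common but assumed action space).
IMG_SHAPE = (3, 64, 64)
EMB_DIM = 512
ACT_DIM = 7

# Frozen "encoder": a fixed random projection standing in for pre-trained R3M.
W_enc = rng.standard_normal((EMB_DIM, int(np.prod(IMG_SHAPE)))) * 0.01

def encode(image: np.ndarray) -> np.ndarray:
    """Map an image to a fixed-length embedding (encoder weights never update)."""
    return W_enc @ image.reshape(-1)

# Behavior-cloned policy head: in practice these weights are learned by
# regressing demonstrated actions from embeddings; random here for the sketch.
W_pi = rng.standard_normal((ACT_DIM, EMB_DIM)) * 0.01

def policy(image: np.ndarray) -> np.ndarray:
    """Full control step at deployment: camera image in, action out."""
    return np.tanh(W_pi @ encode(image))  # tanh bounds actions to [-1, 1]

# At test time the robot repeatedly observes an image and executes the action.
obs = rng.standard_normal(IMG_SHAPE)
action = policy(obs)
print(action.shape)
```

At deployment the loop is just `policy(camera_image)` executed at each control step; no language input is required because the task is implicit in the cloned demonstrations.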