Hi, I assume that the action space of the Feedback policy is defined relative to the gripper-frame coordinates. During training, five frames are selected, with the last frame serving as the goal and the earlier frames as the current states for action prediction. However, why is the `rel_action` of the current frame treated as the ground truth? Isn't `rel_action` supposed to represent the end-effector state of the current frame? I would greatly appreciate it if you could help clarify this.
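For reference, here is a minimal sketch of how I currently read the sampling you describe, assuming a goal-conditioned behavior-cloning setup. The dataset layout, the field names (`obs`, `rel_action`), and the `sample_training_example` helper are my own illustration and not the repo's actual code; the comment in the loop marks the exact step I am asking about.

```python
import numpy as np

# Hypothetical trajectory of length T: for each frame t, rel_action[t] is what I
# understood to be a quantity tied to the end-effector at frame t.
T = 20
traj = {
    "obs": np.random.rand(T, 3),          # placeholder observations
    "rel_action": np.random.rand(T, 7),   # placeholder relative actions
}

def sample_training_example(traj, num_frames=5, rng=np.random.default_rng(0)):
    """Pick num_frames frames; the last one is the goal, each earlier frame is a
    'current state', and (as I understand the training code) the label for that
    state is its own rel_action."""
    length = len(traj["obs"])
    idx = np.sort(rng.choice(length, size=num_frames, replace=False))
    goal_idx = idx[-1]
    examples = []
    for t in idx[:-1]:
        examples.append({
            "current_obs": traj["obs"][t],
            "goal_obs": traj["obs"][goal_idx],
            # This is the step I am unsure about: rel_action[t] is used as the
            # ground-truth action for frame t, even though I would have expected
            # it to describe the end-effector state at frame t rather than the
            # action to take from it.
            "target_action": traj["rel_action"][t],
        })
    return examples

batch = sample_training_example(traj)
print(len(batch), batch[0]["target_action"].shape)
```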