Running on Robot
It is recommended that you test the setup using one of the bag file methods below before running on the robot.
- Download the bag file from this link and put it in a local folder.
- Running this bag file requires the YCB Video object models. If you don't have the YCB Video Dataset downloaded already:
  - Create a folder YCB_Video_Dataset in your local dataset folder.
  - Download the YCB Video object models from this link, then place the downloaded models folder into the YCB_Video_Dataset folder.
- Create a visualization directory <local path to perception repo>/sbpl_perception/visualization to store the outputs.
- Run the Docker image and build the code.
- Launch the PERCH 2.0 ROS node that listens for the topics from the rosbag:
roscore &   # skip this if roscore is already running outside
source /ros_python3_ws/devel/setup.bash
Xvfb :5 -screen 0 800x600x24 & export DISPLAY=:5;
roslaunch object_recognition_node pr2_conveyor_object_recognition.launch
- Time: ~4 s
- For running the PERCH CPU version, launch the node with the CPU launch file:
roscore &   # skip this if roscore is already running outside
source /ros_python3_ws/devel/setup.bash
Xvfb :5 -screen 0 800x600x24 & export DISPLAY=:5;
roslaunch object_recognition_node pr2_conveyor_object_recognition_cpu.launch
- Time: ~75 s
- Start playing the bag file:
rosbag play 3dof_1_2020-03-04-14-17-00.bag
- Publish the labels of the requested objects on the topic (a helper script combining bag playback with this step is sketched after this list):
# For GPU
rostopic pub /requested_object std_msgs/String "data: '004_sugar_box 005_tomato_soup_can 002_master_chef_can 006_mustard_bottle 010_potted_meat_can'"
# For CPU (test with one object first)
rostopic pub /requested_object std_msgs/String "data: '004_sugar_box'"
- You can check the output in the <local path to perception repo>/sbpl_perception/visualization folder and in RViz (use the config file at <local path to perception repo>/object_recognition_node/rviz/pr2_conveyor.rviz).
- The input topic specs, such as the topics for point clouds, are given in the launch file. Other config files with parameters:
# Contains parameter settings related to the PERCH 2.0 code (GPU)
# Note that use_downsampling should be true so that the input cloud can be downsampled
<local path to perception repo>/sbpl_perception/config/pr2_gpu_robot_conv_env_config.yaml
# Contains parameter settings related to the PERCH code (CPU)
<local path to perception repo>/sbpl_perception/config/pr2_conv_env_config_cpu.yaml
# Contains the list of objects in the bank and the paths to their models
<local path to perception repo>/sbpl_perception/config/ycb_objects.xml
# Contains the PR2 Kinect camera intrinsic parameters
<local path to perception repo>/sbpl_perception/config/camera_config.yaml
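The playback and publish steps above can also be scripted once the recognition node is running. Below is a minimal, illustrative bash helper (not part of the repository): the bag name, topic, and object list are taken from the steps above, while the sleep duration and the assumption that the bag sits in the current directory are adjustable assumptions.

```bash
#!/usr/bin/env bash
# Illustrative helper, not from the repo: replay the conveyor bag and request the
# objects while the PERCH 2.0 node (launched in a separate terminal) is running.
set -e
source /ros_python3_ws/devel/setup.bash

BAG=3dof_1_2020-03-04-14-17-00.bag   # assumed to be in the current directory
OBJECTS="004_sugar_box 005_tomato_soup_can 002_master_chef_can 006_mustard_bottle 010_potted_meat_can"

rosbag play "${BAG}" &               # start playback in the background
sleep 2                              # give the node time to start receiving clouds
rostopic pub -1 /requested_object std_msgs/String "data: '${OBJECTS}'"
wait                                 # block until the bag finishes playing
```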
- This bag file uses the 3D model of a crate, which can be obtained by downloading the SameShape dataset from this link. Place it in your local datasets folder after download.
- Create a visualization directory <local path to perception repo>/sbpl_perception/visualization to store the outputs.
- Run the Docker image and build the code.
- Launch the PERCH 2.0 ROS node that listens for the topics from the rosbag:
roscore &   # skip this if roscore is already running outside
source /ros_python3_ws/devel/setup.bash
Xvfb :5 -screen 0 800x600x24 & export DISPLAY=:5;
roslaunch object_recognition_node roman_object_recognition_gpu_robot.launch
- Time: ~2.9 s
- For running the PERCH CPU version, launch the node with the CPU launch file:
roscore &   # skip this if roscore is already running outside
source /ros_python3_ws/devel/setup.bash
Xvfb :5 -screen 0 800x600x24 & export DISPLAY=:5;
roslaunch object_recognition_node roman_object_recognition_robot.launch
- Time: ~32 s
- Play either of the below rosbag files (a few optional sanity-check commands are sketched after this list):
rosbag play crate_x_1_y_01_90_2019-06-07-11-53-17.bag
rosbag play crate_x_1_y_01_105_2019-06-08-14-05-07.bag
- Publish the labels of the requested objects on the topic:
rostopic pub /requested_object std_msgs/String "data: 'crate_test'"
- You can check the output in the <local path to perception repo>/sbpl_perception/visualization folder and in RViz (use the config file at <local path to perception repo>/object_recognition_node/rviz/realsense_camera_robot.rviz).
- The input topic specs, such as the topics for point clouds, are given in the launch file. Other config files with parameters:
# Contains parameter settings related to the PERCH 2.0 code (GPU)
# Note that use_downsampling should be true so that the input cloud can be downsampled
# For Roman, ICP type 3 (fast_gicp) is better to use
<local path to perception repo>/sbpl_perception/config/roman_gpu_robot_env_config.yaml
# Contains parameter settings related to the PERCH code (CPU)
<local path to perception repo>/sbpl_perception/config/roman_env_config.yaml
# Contains the list of objects in the bank and the paths to their models
<local path to perception repo>/sbpl_perception/config/roman_objects.xml
# Contains the Roman Realsense camera intrinsic parameters
<local path to perception repo>/sbpl_perception/config/roman_camera_config.yaml
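If the crate is not detected, it can help to confirm what the bag actually contains and that your request reached the node. The sketch below uses only standard ROS command-line tools; none of these checks are required by the repository.

```bash
# Optional sanity checks with standard ROS CLI tools (not part of the repo's workflow).
# Inspect the topics, message types, and message counts stored in a crate bag:
rosbag info crate_x_1_y_01_90_2019-06-07-11-53-17.bag

# After roslaunch, list the topics currently visible on the ROS master:
rostopic list

# Confirm the requested-object message is actually being published:
rostopic echo -n 1 /requested_object
```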
Note: This is configured only for the GPU version, since the CPU version would be too slow to work in this way.
- Download the bag files from this link and put them in a local folder. You can download one bag file or all of them.
- Running these bag files requires the YCB Video object models. If you don't have the YCB Video Dataset downloaded already:
  - Create a folder YCB_Video_Dataset in your local dataset folder.
  - Download the YCB Video object models from this link, then place the downloaded models folder into the YCB_Video_Dataset folder.
- Create a visualization directory <local path to perception repo>/sbpl_perception/visualization to store the outputs.
- Run the Docker image and build the code.
- Launch the PERCH 2.0 ROS node that listens for the topics from the rosbag. This node will run in continuous tracking mode:
roscore &   # skip this if roscore is already running outside
source /ros_python3_ws/devel/setup.bash
Xvfb :5 -screen 0 800x600x24 & export DISPLAY=:5;
roslaunch object_recognition_node pr2_conveyor_moving_object_recognition.launch
- Publish the labels of the requested objects on the topic:
# For GPU
rostopic pub /requested_object std_msgs/String "data: '035_power_drill'"
- Start playing the bag file. In continuous tracking mode, each received point cloud message will be processed and a pose will be estimated (playback options for looping or slowing down the bag are sketched after this list):
rosbag play drill/drill_1_2020-01-20-13-49-23.bag
- You can check the output in the <local path to perception repo>/sbpl_perception/visualization folder and in RViz (use the config file at <local path to perception repo>/object_recognition_node/rviz/pr2_conveyor.rviz).
- The input topic specs, such as the topics for point clouds, are given in the launch file. Other config files with parameters:
# Contains parameter settings related to the PERCH 2.0 code (GPU)
# Note that use_downsampling should be true so that the input cloud can be downsampled
<local path to perception repo>/sbpl_perception/config/pr2_gpu_robot_conv_env_config.yaml
# Contains the list of objects in the bank and the paths to their models
<local path to perception repo>/sbpl_perception/config/ycb_objects.xml
# Contains the PR2 Kinect camera intrinsic parameters
<local path to perception repo>/sbpl_perception/config/camera_config.yaml
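Since continuous tracking processes every incoming cloud, it can be convenient to loop the bag or slow playback down while experimenting. The sketch below uses standard rosbag play flags; looping and the 0.5 rate factor are suggestions, not requirements of the repository.

```bash
# Request the drill first (as above), then replay the bag in a loop at half speed.
# -l (loop) and -r (rate factor) are standard rosbag play options; the 0.5 factor
# is only a suggestion for machines that cannot keep up with real-time playback.
rostopic pub -1 /requested_object std_msgs/String "data: '035_power_drill'"
rosbag play -l -r 0.5 drill/drill_1_2020-01-20-13-49-23.bag
```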
- Set variables to allow communication with the robot:
export ROS_MASTER_URI=http://192.168.11.123:11311
export ROS_IP='192.168.11.23'
- Run the Docker image and build the code.
- Run the code:
source /ros_python3_ws/devel/setup.bash
Xvfb :5 -screen 0 800x600x24 & export DISPLAY=:5;
roslaunch object_recognition_node pr2_conveyor_object_recognition.launch
- You can copy the launch file pr2_conveyor_object_recognition.launch and its associated config files and modify them according to your robot (the sketch below shows the corresponding network variables with placeholder addresses).
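When adapting the setup to your own robot, the ROS master URI and the container's IP address will differ from the example values above. A minimal sketch with placeholder addresses; substitute your robot's master URI and the IP of the machine running this container.

```bash
# Point the container at your own robot's ROS master. The addresses are placeholders:
# use your robot's master URI and the IP of the machine running this container on
# the same network as the robot.
export ROS_MASTER_URI=http://<robot-master-ip>:11311
export ROS_IP=<this-machine-ip>

# Quick connectivity check before launching the recognition node: this should list
# the robot's topics (e.g. the point cloud topic referenced in the launch file).
rostopic list
```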