
Running on Robot


It is recommended that, before running on the robot, you test the setup using one of the bag file methods below.

YCB 3-DOF PR2 bag file

  1. Download the bagfile from this link and put it in a local folder.
  2. Running this bagfile requires the YCB Video object models. If you haven't already downloaded the YCB Video Dataset :
    • Create a folder YCB_Video_Dataset in your local dataset folder.
    • Download the YCB Video object models from this link, then place the downloaded models folder into the YCB_Video_Dataset folder.
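    A minimal sketch of the resulting layout, assuming ~/datasets is your local dataset folder (substitute your own path) :
    mkdir -p ~/datasets/YCB_Video_Dataset
    # after extracting the downloaded archive, move its models folder into place
    mv ~/Downloads/models ~/datasets/YCB_Video_Dataset/
    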
  3. Create a visualization directory <local path to perception repo>/sbpl_perception/visualization to store the outputs.
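    For example (using the repo path placeholder used throughout this page) :
    mkdir -p <local path to perception repo>/sbpl_perception/visualization
    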
  4. Run the Docker image and build the code.
  5. Launch the PERCH 2.0 ROS node that listens for the topics from the rosbag :
    roscore& #skip this if roscore is running outside
    source /ros_python3_ws/devel/setup.bash
    Xvfb :5 -screen 0 800x600x24 & export DISPLAY=:5;
    roslaunch object_recognition_node pr2_conveyor_object_recognition.launch 
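    # optional sanity check from another terminal: the PERCH node should appear in the
    # node list (the exact node name depends on the launch file)
    rosnode list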
    
    • Time ~ 4s
  6. To run the PERCH CPU version, launch the node with the CPU launch file :
    roscore& #skip this if roscore is running outside
    source /ros_python3_ws/devel/setup.bash
    Xvfb :5 -screen 0 800x600x24 & export DISPLAY=:5;
    roslaunch object_recognition_node pr2_conveyor_object_recognition_cpu.launch 
    
    • Time ~ 75s
  7. Start playing the bag file :
    rosbag play 3dof_1_2020-03-04-14-17-00.bag
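    # optional: inspect the bag's topics to cross-check them against the topic names in the launch file
    rosbag info 3dof_1_2020-03-04-14-17-00.bag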
    
  8. Publish the labels of requested objects on the topic :
    # For GPU
    rostopic pub /requested_object std_msgs/String "data: '004_sugar_box 005_tomato_soup_can 002_master_chef_can 006_mustard_bottle 010_potted_meat_can'"
    
    # For CPU (test with one object first)
    rostopic pub /requested_object std_msgs/String "data: '004_sugar_box'"
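    # optional check from another terminal that the request was received on the topic
    rostopic echo -n 1 /requested_object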
    
  9. You can check the output in the <local path to perception repo>/sbpl_perception/visualization folder and in RViz (use the config file at <local path to perception repo>/object_recognition_node/rviz/pr2_conveyor.rviz)
  10. The input topic specs, such as the point cloud topics, are set in the launch file. Other config files with parameters :
    # Contains parameter settings related to the PERCH 2.0 code (GPU)
    # Note that use_downsampling should be true so that the input cloud can be downsampled
    <local path to perception repo>/sbpl_perception/config/pr2_gpu_robot_conv_env_config.yaml
    
    # Contains parameter settings related to the PERCH code (CPU)
    <local path to perception repo>/sbpl_perception/config/pr2_conv_env_config_cpu.yaml
    
    # Contains the list of objects in the bank and path to their models
    <local path to perception repo>/sbpl_perception/config/ycb_objects.xml
    
    # Contains the PR2 Kinect camera intrinsic parameters
    <local path to perception repo>/sbpl_perception/config/camera_config.yaml
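    # quick sanity check that downsampling is enabled in the GPU config (grep is just a convenience)
    grep use_downsampling <local path to perception repo>/sbpl_perception/config/pr2_gpu_robot_conv_env_config.yaml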
    

YCB 3-DOF roman bag file

  1. Download the bagfiles from here and here.

  2. This bag file uses the 3D model of a crate, which is part of the SameShape dataset available from this link. After downloading, place the dataset in your local datasets folder.
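
    After placing the dataset, you can optionally confirm that the crate entry in the object bank points at your local model path (the object bank file is listed in step 10 below; grep is just a convenience) :
    grep -i crate <local path to perception repo>/sbpl_perception/config/roman_objects.xml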

  3. Create a visualization directory <local path to perception repo>/sbpl_perception/visualization to store the outputs.

  4. Run the Docker image and build the code.

  5. Launch the PERCH 2.0 ROS node that listens for the topics from the rosbag :

    roscore& #skip this if roscore is running outside
    source /ros_python3_ws/devel/setup.bash
    Xvfb :5 -screen 0 800x600x24 & export DISPLAY=:5;
    roslaunch object_recognition_node roman_object_recognition_gpu_robot.launch
    
    • Time ~ 2.9s
  6. To run the PERCH CPU version, launch the node with the CPU launch file :

    roscore& #skip this if roscore is running outside
    source /ros_python3_ws/devel/setup.bash
    Xvfb :5 -screen 0 800x600x24 & export DISPLAY=:5;
    roslaunch object_recognition_node roman_object_recognition_robot.launch 
    
    • Time ~ 32s
  7. Play either of the rosbag files below :

    rosbag play crate_x_1_y_01_90_2019-06-07-11-53-17.bag
    rosbag play crate_x_1_y_01_105_2019-06-08-14-05-07.bag
    
  8. Publish the labels of requested objects on the topic :

    rostopic pub /requested_object std_msgs/String "data: 'crate_test'"
    
  9. You can check the output in the <local path to perception repo>/sbpl_perception/visualization folder and in RViz (use the config file at <local path to perception repo>/object_recognition_node/rviz/realsense_camera_robot.rviz)

  10. The input topic specs, such as the point cloud topics, are set in the launch file. Other config files with parameters :

    # Contains parameter settings related to the PERCH 2.0 code (GPU)
    # Note that use_downsampling should be true so that the input cloud can be downsampled
    # For Roman, ICP type 3 (fast_gicp) works better
    <local path to perception repo>/sbpl_perception/config/roman_gpu_robot_env_config.yaml
    
    # Contains parameter settings related to the PERCH code (CPU)
    <local path to perception repo>/sbpl_perception/config/roman_env_config.yaml
    
    # Contains the list of objects in the bank and path to their models
    <local path to perception repo>/sbpl_perception/config/roman_objects.xml
    
    # Contains the Roman Realsense camera intrinsic parameters
    <local path to perception repo>/sbpl_perception/config/roman_camera_config.yaml
    

YCB 3-DOF conveyor tracking bag file

Note : This is configured only for the GPU version, since the CPU version would be too slow to run in this mode

  1. Download the bagfiles from this link and put them in a local folder. You can download one bag file or all of them.

  2. Running this bagfile requires the YCB Video object models. If you haven't already downloaded the YCB Video Dataset :

    • Create a folder YCB_Video_Dataset in your local dataset folder.
    • Download the YCB Video object models from this link, then place the downloaded models folder into the YCB_Video_Dataset folder.
  3. Create a visualization directory <local path to perception repo>/sbpl_perception/visualization to store the outputs.

  4. Run the Docker image and build the code.

  5. Launch the PERCH 2.0 ROS node that listens for the topics from the rosbag. This node will run in continuous tracking mode :

    roscore& #skip this if roscore is running outside
    source /ros_python3_ws/devel/setup.bash
    Xvfb :5 -screen 0 800x600x24 & export DISPLAY=:5;
    roslaunch object_recognition_node pr2_conveyor_moving_object_recognition.launch
    
  6. Publish the labels of requested objects on the topic :

    # For GPU
    rostopic pub /requested_object std_msgs/String "data: '035_power_drill'"
    
  7. Start playing the bag file. In continuous tracking mode, each received point cloud message will be processed and a pose will be estimated :

    rosbag play drill/drill_1_2020-01-20-13-49-23.bag
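    # alternative: if pose estimation cannot keep up with real-time streaming, slow playback
    # down with the standard rosbag rate flag, e.g.
    # rosbag play --rate 0.5 drill/drill_1_2020-01-20-13-49-23.bag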
    
  8. You can check the output in the <local path to perception repo>/sbpl_perception/visualization folder and in RViz (use the config file at <local path to perception repo>/object_recognition_node/rviz/pr2_conveyor.rviz)

  9. The input topic specs, such as the point cloud topics, are set in the launch file. Other config files with parameters :

    # Contains parameter settings related to the PERCH 2.0 code (GPU)
    # Note that use_downsampling should be true so that the input cloud can be downsampled
    <local path to perception repo>/sbpl_perception/config/pr2_gpu_robot_conv_env_config.yaml
    
    # Contains the list of objects in the bank and path to their models
    <local path to perception repo>/sbpl_perception/config/ycb_objects.xml
    
    # Contains the PR2 Kinect camera intrinsic parameters
    <local path to perception repo>/sbpl_perception/config/camera_config.yaml
    

Actual Robot

  1. Set variables to allow communication with the robot :
    export ROS_MASTER_URI=http://192.168.11.123:11311
    export ROS_IP='192.168.11.23'
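    
    Once these are set, a quick way to confirm that the workstation can reach the robot's ROS master is to list its topics (standard ROS tooling; the visible topics depend on the robot) :
    rostopic list
    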
  2. Run the Docker image and build the code.
  3. Run the code :
    source /ros_python3_ws/devel/setup.bash
    Xvfb :5 -screen 0 800x600x24 & export DISPLAY=:5;
    roslaunch object_recognition_node pr2_conveyor_object_recognition.launch
    
  4. You can copy the launch file pr2_conveyor_object_recognition.launch and its associated config files and modify them according to your robot.
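
    A minimal sketch of copying the launch file before editing it (the launch directory path and the new file name are assumptions; adjust to your checkout and your robot) :
    cp <local path to perception repo>/object_recognition_node/launch/pr2_conveyor_object_recognition.launch \
       <local path to perception repo>/object_recognition_node/launch/my_robot_object_recognition.launch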