
Real Robot Remote Deployment


'Remote' Deployment

On this page we look at how to introduce the Robot Laptop into the ROS Network. This is useful if, for instance, we want to leverage visualisation tools that we can't run on the robot (because it doesn't have a display!). The laptops are more powerful too, so we may want to perform more computationally intensive tasks on them, to make things work more efficiently (and save our robot some work).


Prerequisites

  1. First, make sure that the laptop is connected to the DIA-LAB WiFi network.
  2. Remember that the ROS Master runs on the robot, and certain core packages need to be running on the robot too in order for it to actually do anything! Before you start, then, you need to launch ROS on the robot (if you haven't done so already).

Configuring the Laptop

There are three important things you need to do in order to make sure that the laptop is configured correctly and knows which robot it should be connecting to...

Thing 1: Enabling 'Waffle Mode'

Our robot laptops are also set up to work with our friendly MiRo robots, and so there are two different modes that the laptops can operate in: 'MiRo Mode' and 'Waffle Mode'. Clearly, we want to use the latter, so make sure the laptop is in Waffle Mode by opening up an Ubuntu Terminal Instance on the laptop (using the Ctrl+Alt+T keyboard shortcut) and running the following command:

$ robot_switch tb3

A confirmation message will indicate that the laptop has successfully been switched into Waffle Mode (if it wasn't already).

Thing 2: Enabling 'Real Robot Mode'

The robot laptops can also be used to work in simulation too (but you've probably had enough of that by now, haven't you?!). To work with the physical robots, make sure that you are in 'Real Robot Mode' by running the following command:

$ robot_mode robot

... which should present you with the following message:

Switching into 'Real Robot Mode' (run 'robot_mode sim' to work in simulation instead).

Thing 3: Pairing the Laptop with a Robot

Remember that the ROS Master runs on the robot itself, and the laptop therefore needs to know where to look, in order to join its ROS Network.

To tell the laptop which robot you are working with, run the following command:

$ pair_with_waffle X

... replacing X with the number of your robot.

You should then see the following message:

Pairing with robot: dia-waffleX...
Done. Re-source the .bashrc (by typing 'src') for the changes to take effect.

Do as instructed and re-source your environment now:

$ src
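Under the hood, 'Waffle Mode', 'Real Robot Mode' and pairing presumably boil down to setting a few ROS environment variables that the src command then re-reads from the laptop's .bashrc. The exact lines that the pairing tools write are specific to the lab setup, so treat the following as an illustrative sketch only (the hostname placeholder and the src alias are assumptions):

# Illustrative lines of the kind that pairing adds to ~/.bashrc (not the actual lab config):
export ROS_MASTER_URI=http://dia-waffleX:11311    # the ROS Master runs on the robot (11311 is the ROS default port)
export ROS_HOSTNAME=<this laptop's hostname>      # how other machines on the network reach the laptop
alias src='source ~/.bashrc'                      # the 'src' shortcut used throughout these pages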

Once you've done all three of these things, it's a good idea to close down the terminal window. Don't worry if you see a pop-up message saying that "there is a process still running"; just click the "Close Terminal" button to proceed. Every new terminal window you open from now on should have the correct settings applied, so you're good to go!
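Before moving on, it's worth a quick sanity check that the laptop really has joined the robot's ROS network. Assuming ROS is already running on the robot (see the prerequisites above), listing topics from a fresh laptop terminal should return the robot's topics rather than an error about being unable to contact the ROS Master:

$ rostopic list    # if pairing worked, this should list topics such as /cmd_vel and /scan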

Synchronising System Clocks

For certain tasks (using the Navigation Stack, for instance) both the robot and laptop system clocks need to be synchronised; otherwise ROS may throw errors. To do this, run the following command on both the laptop and the robot.

Note: On the laptop you'll need to be connected to 'eduroam' for this to work.

$ sudo ntpdate ntp.ubuntu.com

You'll be asked to enter the password on each system, then you'll need to wait 10-15 seconds for the process to complete.
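If you're curious how far apart the two clocks actually are, ntpdate can also be run in query-only mode, which reports the offset against the NTP server without changing anything (and doesn't need sudo):

$ ntpdate -q ntp.ubuntu.com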

Remote Deployment Exercises

Here are some exercises for you to have a go at, to give you a taste of the kind of ROS tools you can run 'remotely' on the laptop.

Exercise 1: Observing your robot's environment

You can use some ROS tools that you will be familiar with from simulation in order to see the real world through the eyes of the robot!

  1. You'll need to launch the camera packages on the robot first, and for that you'll need access to a new terminal instance on the robot. Much like you did previously, you can do this in one of two ways:

    1. Option 1: return to the tmux instance that you created when launching ROS on the robot (ROBOT 1), and use the Ctrl+B, C key combination to create a new tmux window.
    2. Option 2: create a new robot terminal instance in VS Code using the Remote - SSH extension.

    This new terminal instance shall herein be referred to as ROBOT 2.

  2. Launch the camera nodes on the robot:

     [ROBOT 2] $ roslaunch realsense2_camera rs_camera.launch
    

    Pro Tip: We've got an alias for that too, you know!

    You may see some warnings when you execute this launch command, but - in most circumstances - it's OK to ignore them, so just carry on for now.

    This essentially grabs images from the RealSense Camera, creates some new topics on the ROS network and publishes the image data to them (a quick way to check exactly which topics have appeared is sketched at the end of this exercise).

  3. Leave this terminal instance running in the background now and open up a new terminal instance on the laptop (let's call that REMOTE 2). In here, enter the following command:

     [REMOTE 2] $ roslaunch turtlebot3_bringup turtlebot3_remote.launch
    

    (or use a handy alias again!)

  4. Remember from Week 3 that RViz is a ROS tool that allows us to visualise the data being measured by a robot in real-time. We can launch this on the laptop. To do so, create another terminal instance (REMOTE 3) and launch RViz with the following command (also available as an alias):

     [REMOTE 3] $ rosrun rviz rviz -d `rospack find turtlebot3_description`/rviz/model.rviz
    

  5. As we did in Week 6, we can also use rqt_image_view to view live image data being streamed to ROS image topics. You could open another terminal instance on the laptop to launch this (REMOTE 4):

     [REMOTE 4] $ rqt_image_view
    

    Select /camera/color/image_raw in the dropdown topic list to see the images being obtained and published by the robot's camera!
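If you'd like to confirm exactly what the camera launch has added to the network, a couple of quick checks can be run from any spare laptop terminal. The topic names below are the realsense2_camera defaults, so treat them as an assumption and check them against your own rostopic list output:

$ rostopic list | grep camera            # list the camera-related topics now being published
$ rostopic hz /camera/color/image_raw    # confirm images are actually arriving, and at what rate (Ctrl+C to stop)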

Use these tools to keep an eye on your robot's environment whilst performing the next exercise...

Exercise 2: Driving your robot around using the laptop keyboard

In simulation, you used the turtlebot3_teleop package to drive your robot around in an empty world. This works in exactly the same way with a real robot in a real world!

  1. Open yet another new terminal instance (REMOTE 5) and enter exactly the same roslaunch command as you used in simulation to launch the turtlebot3_teleop node:

     [REMOTE 5] $ roslaunch turtlebot3_teleop turtlebot3_teleop_key.launch
    
  2. Drive your robot around using the laptop keyboard (as you did in simulation) taking care to avoid any obstacles as you do!

  3. Once you've spent a bit of time on this, close the teleop node down by entering Ctrl+C in REMOTE 5.

  4. Close down the RViz and rqt_image_view nodes running in REMOTE 3 and REMOTE 4 as well; we won't need these for the next exercise.

  5. Back in REMOTE 2, the turtlebot3_remote bringup should still be running. You can close this down now as well.
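Occasionally a robot will keep creeping along with the last velocity command it received before the teleop node shut down. If that happens, a single zero-velocity message published from any laptop terminal should stop it (this assumes the standard /cmd_vel topic, which is what the TurtleBot3 listens on):

$ rostopic pub -1 /cmd_vel geometry_msgs/Twist -- '[0.0, 0.0, 0.0]' '[0.0, 0.0, 0.0]'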

Exercise 3: Creating a ROS package on the Laptop

Previously you created a ROS package on the robot's filesystem, and you should do the same on the laptop now (or, if your package is a GitHub repo, perhaps you could clone it to the laptop instead?).

Either way, a Catkin Workspace exists on the laptop's filesystem here:

/home/student/catkin_ws/

... and you should create packages in its src directory...

  1. In REMOTE 2 navigate to the Catkin Workspace src directory on the laptop:

     [REMOTE 2] $ cd ~/catkin_ws/src/
    
  2. Either git clone your existing package into this, or create a new one using the catkin_create_pkg tool (a minimal example of the latter is sketched at the end of this exercise).

  3. catkin build is installed on the laptop, so you can go ahead and run this as you would in WSL-ROS:

     [REMOTE 2] $ catkin build {your package name}
    
  4. Then, re-source your environment:

     [REMOTE 2] $ src
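If you opted to create a fresh package rather than clone an existing one, the catkin_create_pkg step might look something like the sketch below. The package name and dependency list here are purely illustrative, so substitute your own:

[REMOTE 2] $ cd ~/catkin_ws/src/
[REMOTE 2] $ catkin_create_pkg my_real_robot_pkg rospy std_msgs sensor_msgs geometry_msgs
[REMOTE 2] $ catkin build my_real_robot_pkg
[REMOTE 2] $ src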
    

Exercise 4: Using SLAM to create a map of the environment

Remember how we used SLAM in Week 3 to create a map of a simulated environment? We'll do this now on a real robot in a real environment!

  1. In REMOTE 3 enter the following command to launch all the necessary SLAM nodes on the laptop:

     [REMOTE 3] $ roslaunch turtlebot3_slam turtlebot3_slam.launch
    

    (also available as an alias, again)!

    This will launch RViz again, where you should now be able to see a model of the TurtleBot3 from a top-down view surrounded by green dots representing the real-time LiDAR data. The SLAM tools will already have begun processing this data to start building a map of the boundaries that are currently visible to your robot based on its position in the environment.

    Note: To begin with, your robot may just appear as a white shadow in RViz. It may take some time for the robot to render correctly as the SLAM processes and data communications catch up with one another. This can sometimes take up to a minute or so, so please be patient! If nothing has happened after a minute, speak to a member of the teaching team.

  2. Head back to REMOTE 5, and launch the turtlebot3_teleop node again. Start to drive the robot around slowly and carefully to build up a complete map of the area.

    Note: It's best to do this slowly and perform multiple circuits of the whole area to build up a more accurate map.

  3. Once you are happy that your robot has built up a good map of its environment, you can save this map using the map_server package (again, in exactly the same way as you did in Week 3):

    1. First, create a new directory within your package on the laptop; we'll use this to save maps in. You should still be in your package directory in REMOTE 2, so head back to that terminal:

      1. There's no harm in running this, just to make sure that you are in the right place to start with:

         [REMOTE 2] $ roscd {your package name}
        
      2. Create a directory in here called maps:

         [REMOTE 2] $ mkdir maps
        
      3. Navigate into this directory:

         [REMOTE 2] $ cd maps/
        
    2. Then, use rosrun to run the map_saver node from the map_server package to save a copy of your map:

       [REMOTE 2] $ rosrun map_server map_saver -f {map name}
      

      Replace {map name} with an appropriate name for your map. This will create two files, a {map name}.pgm and a {map name}.yaml file, both of which contain data related to the map that you have just created (a sketch of how a saved map gets re-used later on is included at the end of this exercise).

    3. The .pgm file can be opened in eog on the laptop:

       [REMOTE 2] $ eog {map name}.pgm
      
  4. Return to REMOTE 3 and close down SLAM by pressing Ctrl+C. The process should stop and RViz should close down. Close down the teleop node in REMOTE 5 if that's still going too.
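As a final aside on maps: the .pgm/.yaml pair you saved is exactly what gets loaded back in when the map is needed again later (by the navigation stack mentioned earlier, for example). A minimal sketch of how a saved map is typically served back onto the /map topic, assuming you run it from your maps directory:

$ rosrun map_server map_server {map name}.yaml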

Exercise 5: Object detection

In Week 6 we developed some ROS nodes to analyse images from our simulated robot's camera, and we then enhanced these to allow our robot to detect coloured pillars in its simulated environment. Why not try something similar here, and this time see if you can get your robot to detect some of the coloured pillars we have in the lab instead?!

There are a few things that you'll need to do (and a few things to be aware of) before you get started on this exercise:

  1. First, you'll need to make sure that the camera nodes are running. You launched these in the ROBOT 2 terminal instance at the start of Exercise 1.

  2. Develop your ROS nodes inside the src directory of the package that you created on the laptop in Exercise 3.

  3. Use the Object Detection Template from Week 6 as a guide to help you. You'll need to modify this a bit for the real robot/laptop setup though:

    1. The real robot publishes its camera images to a topic with a slightly different name to that used in simulation. Use rostopic list to identify the correct camera image topic on the real robot, and adapt the rospy.Subscriber() call in the Object Detection Node accordingly (see the sketch at the end of this exercise).

    2. Change the code to save images to an images directory inside your package, rather than the ~/myrosdata/week6_images/ folder that the template uses by default.

    3. Determine the native dimensions of the images obtained by the real robot camera. The images are smaller than the ones we obtained in simulation, so you might want to adjust your crop dimensions accordingly.

  4. Obtain some images and analyse them using the image_colours.py node from the com2009_examples package. This package is installed on the laptop, so you can execute it in exactly the same way as in Week 6, using rosrun.

  5. Try to define some image masks so that your robot can detect one (or more) of the coloured pillars in the robot arena.

  6. Copy across the colour_search.py node from the com2009_examples package into your own package src directory and see if you can get this working so that it makes the robot stop turning once it is facing a coloured pillar in the robot arena!
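To help with points 3.1 and 3.3 above, both the real camera image topic and its native image dimensions can be discovered from the command line. The topic names below follow the realsense2_camera defaults, so verify them against your own rostopic list output:

$ rostopic list | grep image                      # find the image topics the real camera publishes
$ rostopic echo -n 1 /camera/color/camera_info    # the width and height fields give the native image size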

Wrapping Up

When you're finished working with a robot, remember that it needs to be shut down properly.

  1. First, close down any active processes that are running on the robot by checking through any active ROBOT terminals and stopping these processes using Ctrl+C.

  2. Then, shut down the robot by entering the following command in ROBOT 1:

     [ROBOT 1] $ off
    

    Enter the password when asked, then wait for the "Connection to dia-waffleX closed" message.

Real Robot Lab Instructions:
← 'Local' Deployment [Previous] | [Next] Tips & Tricks →
