Real Robot Remote Deployment
On this page we look at how to introduce the Robot Laptop into the ROS Network. This is useful if we want to leverage tools that we can't run on the robot, such as visualisation tools (the robot doesn't have a display!). The laptops are more powerful too, so we may want to perform more computationally intensive tasks on them, to make things work more efficiently (and save our robot some work).
- Prerequisites
- Configuring the Laptop
- NEW: Synchronising System Clocks
- Remote Deployment Exercises:
- Wrapping Up
- First, make sure that the laptop is connected to the DIA-LAB WiFi network.
- Remember that the ROS Master runs on the robot, and we have to have certain core packages running on the robot too, in order for it to actually be able to do anything! Before you start then, you need to launch ROS on the robot (if you haven't done so already).
There are three important things you need to do in order to make sure that the laptop is configured correctly and knows which robot it should be connecting to...
Our robot laptops are also set up to work with our friendly MiRo robots, and so there are two different modes that the laptops can operate in: 'MiRo Mode' and 'Waffle Mode'. Clearly, we want to use the latter, so make sure the laptop is in Waffle Mode by opening up an Ubuntu Terminal Instance on the laptop (using the Ctrl+Alt+T keyboard shortcut) and running the following command:
$ robot_switch tb3
A confirmation message indicates that the laptop has successfully been switched into Waffle Mode (if it wasn't already).
The robot laptops can also be used to work in simulation too (but you've probably had enough of that by now, haven't you?!). To work with the physical robots, make sure that you are in 'Real Robot Mode' by running the following command:
$ robot_mode robot
... which should present you with the following message:
Switching into 'Real Robot Mode' (run 'robot_mode sim' to work in simulation instead).
Remember that the ROS Master runs on the robot itself, and the laptop therefore needs to know where to look, in order to join its ROS Network.
To tell the laptop which robot that you are working with, run the following command:
$ pair_with_waffle X
... replacing X with the number of your robot.
You should then see the following message:
Pairing with robot: dia-waffleX...
Done. Re-source the .bashrc (by typing 'src') for the changes to take effect.
Do as instructed and re-source your environment now:
$ src
Once you've done all three of these things, it's a good idea to close down the terminal window. Don't worry if you see a pop-up message saying that "there is a process still running," just click the "Close Terminal" button to proceed. Every new terminal window you open from now on should have the correct settings applied, so you're good to go!
For certain tasks (using the Navigation Stack, for instance) both the robot and laptop system clocks need to be synchronised, otherwise ROS may throw errors. To do this, run the following command on both the laptop and the robot.
Note: On the laptop you'll need to be connected to 'eduroam' for this to work.
$ sudo ntpdate ntp.ubuntu.com
You'll be asked to enter the password on each system, then you'll need to wait 10-15 seconds for the process to complete.
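In case you're wondering what this actually fixes: ROS timestamps come from each machine's system clock, so any offset between the two clocks can make messages appear to arrive from the past or future. If you want to see the offset for yourself, here's a sketch using the third-party ntplib package (an assumption: it isn't necessarily installed on the lab machines):

```python
# Sketch: report this machine's clock offset from ntp.ubuntu.com.
# Assumes the third-party 'ntplib' package is available (pip install ntplib).
import ntplib

response = ntplib.NTPClient().request("ntp.ubuntu.com", version=3)
print("Clock offset: %.3f seconds" % response.offset)
```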
Here are some exercises to give you a taste of the kinds of ROS tools you can run 'remotely' on the laptop.
You can use some ROS tools that you will be familiar with from simulation in order to see the real world through the eyes of the robot!
- You'll need to launch the camera packages on the robot first and for that, you'll need access to a new terminal instance on the robot. Much like you did previously, you can do this in one of two ways:
  - Option 1: return to the tmux instance that you created when launching ROS on the robot (ROBOT 1), and use the Ctrl+B, C key combination to create a new tmux window.
  - Option 2: create a new robot terminal instance in VS Code using the Remote - SSH extension.
  This new terminal instance shall herein be referred to as ROBOT 2.
- Launch the camera nodes on the robot:
[ROBOT 2] $ roslaunch realsense2_camera rs_camera.launch
Pro Tip: We've got an alias for that too, you know!
You may see some warnings when you execute this launch command, but - in most circumstances - it's OK to ignore them, so just carry on for now.
This essentially grabs images from the RealSense Camera, creates some new topics on the ROS network and publishes the image data to these topics.
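If you want to convince yourself that the laptop really is receiving this data over the network, you could run a minimal subscriber node like the sketch below (the node name is made up, and it uses the /camera/color/image_raw topic that we'll select in rqt_image_view shortly):

```python
#!/usr/bin/env python
# Minimal sketch: receive the robot's RealSense images on the laptop.
import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()

def camera_cb(img_msg):
    # Convert the ROS Image message into an OpenCV image (BGR format)
    cv_img = bridge.imgmsg_to_cv2(img_msg, desired_encoding="bgr8")
    rospy.loginfo("received a %dx%d image", img_msg.width, img_msg.height)

rospy.init_node("camera_listener")
rospy.Subscriber("/camera/color/image_raw", Image, camera_cb)
rospy.spin()
```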
- Leave this terminal instance running in the background now and open up a new terminal instance on the laptop (let's call that REMOTE 2). In here, enter the following command:
[REMOTE 2] $ roslaunch turtlebot3_bringup turtlebot3_remote.launch
(or use a handy alias again!)
- Remember from Week 3 that RViz is a ROS tool that allows us to visualise the data being measured by a robot in real-time. We can launch this on the laptop. To do so, create another terminal instance (REMOTE 3) and launch RViz with the following command (also available as an alias):
[REMOTE 3] $ rosrun rviz rviz -d `rospack find turtlebot3_description`/rviz/model.rviz
- As we did in Week 6, we can also use rqt_image_view to view live image data being streamed to ROS image topics. You could open another terminal instance on the laptop to launch this (REMOTE 4):
[REMOTE 4] $ rqt_image_view
Select /camera/color/image_raw in the dropdown topic list to see the images being obtained and published by the robot's camera!
Use these tools to keep an eye on your robot's environment whilst performing the next exercise...
In simulation, you used the turtlebot3_teleop package to drive your robot around in an empty world. This works in exactly the same way with a real robot in a real world!
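Under the hood, the teleop node simply converts your key presses into geometry_msgs/Twist messages published on the /cmd_vel topic. Here's a minimal sketch of that mechanism (the node name is invented, and the speed is deliberately conservative). Only run something like this with plenty of clear space around the robot!

```python
#!/usr/bin/env python
# Sketch: what teleop does underneath - publish velocity commands to /cmd_vel.
import rospy
from geometry_msgs.msg import Twist

rospy.init_node("simple_mover")
pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
rate = rospy.Rate(10)  # publish at 10 Hz

cmd = Twist()
cmd.linear.x = 0.05  # a slow, safe forward speed (m/s)

while not rospy.is_shutdown():
    pub.publish(cmd)
    rate.sleep()
```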
- Open yet another new terminal instance (REMOTE 5) and enter exactly the same roslaunch command as you used in simulation to launch the turtlebot3_teleop node:
[REMOTE 5] $ roslaunch turtlebot3_teleop turtlebot3_teleop_key.launch
- Drive your robot around using the laptop keyboard (as you did in simulation), taking care to avoid any obstacles as you do!
- Once you've spent a bit of time on this, close the teleop node down by entering Ctrl+C in REMOTE 5.
- Close down RViz and the rqt_image_view nodes running in REMOTE 3 and REMOTE 4 as well; we won't need these for the next exercise.
- Back in REMOTE 2, the turtlebot3_remote bringup should still be running. You can close this down as well now.
Previously you created a ROS package on the robot's filesystem, and you should do the same on the laptop now (or if your package is a GitHub repo, perhaps you could clone it to the laptop instead?)
Either way, a Catkin Workspace exists on the laptop's filesystem here:
/home/student/catkin_ws/
... and you should create packages in its src directory...
- In REMOTE 2, navigate to the Catkin Workspace src directory on the laptop:
[REMOTE 2] $ cd ~/catkin_ws/src/
- Either git clone your existing package into this, or create a new one using the catkin_create_pkg tool.
- catkin build is installed on the laptop, so you can go ahead and run this as you would in WSL-ROS:
[REMOTE 2] $ catkin build {your package name}
- Then, re-source your environment:
[REMOTE 2] $ src
Remember how we used SLAM in Week 3 to create a map of a simulated environment? We'll do this now on a real robot in a real environment!
- In REMOTE 3, enter the following command to launch all the necessary SLAM nodes on the laptop:
[REMOTE 3] $ roslaunch turtlebot3_slam turtlebot3_slam.launch
(also available as an alias, again)!
This will launch RViz again, where you should now be able to see a model of the TurtleBot3 from a top-down view surrounded by green dots representing the real-time LiDAR data. The SLAM tools will already have begun processing this data to start building a map of the boundaries that are currently visible to your robot based on its position in the environment.
Note: To begin with your robot may just appear as a white shadow (similar to the left-hand image). It may take some time for the robot to render correctly (like the right-hand image) as the SLAM processes and data communications catch up with one another. This can sometimes take up to a minute or so, so please be patient! If - after a minute - nothing has happened, then speak to a member of the teaching team.
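Behind the scenes, the SLAM node publishes the map it is building on the /map topic as a nav_msgs/OccupancyGrid message. If you're curious, a sketch like this (node name made up) lets you watch the map grow from another laptop terminal:

```python
#!/usr/bin/env python
# Sketch: watch SLAM map updates arriving on the /map topic.
import rospy
from nav_msgs.msg import OccupancyGrid

def map_cb(grid):
    info = grid.info
    rospy.loginfo("map update: %dx%d cells at %.3f m/cell",
                  info.width, info.height, info.resolution)

rospy.init_node("map_watcher")
rospy.Subscriber("/map", OccupancyGrid, map_cb)
rospy.spin()
```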
- Head back to REMOTE 5, and launch the turtlebot3_teleop node again. Start to drive the robot around slowly and carefully to build up a complete map of the area.
  Note: It's best to do this slowly and perform multiple circuits of the whole area to build up a more accurate map.
- Once you are happy that your robot has built up a good map of its environment, you can save this map using the map_server package (again, in exactly the same way as you did in Week 3):
  - First, create a new directory within your {your package name} package on the laptop. We'll use this to save maps in. You should still be in your package directory in REMOTE 2, so head back to that one:
    - There's no harm in running this, just to make sure that you are in the right place to start with:
    [REMOTE 2] $ roscd {your package name}
    - Create a directory in here called maps:
    [REMOTE 2] $ mkdir maps
    - Navigate into this directory:
    [REMOTE 2] $ cd maps/
  - Then, use rosrun to run the map_saver node from the map_server package to save a copy of your map:
  [REMOTE 2] $ rosrun map_server map_saver -f {map name}
  Replacing {map name} with an appropriate name for your map. This will create two files: a {map name}.pgm and a {map name}.yaml file, both of which contain data related to the map that you have just created.
  - The .pgm file can be opened in eog on the laptop:
  [REMOTE 2] $ eog {map name}.pgm
- Return to REMOTE 3 and close down SLAM by pressing Ctrl+C. The process should stop and RViz should close down. Close down the teleop node in REMOTE 5 if that's still going too.
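As an optional extra, you could have a look at the metadata that map_saver wrote to the {map name}.yaml file. This sketch assumes the standard map_server fields (image, resolution, origin) and uses a placeholder file name:

```python
# Sketch: read the metadata saved alongside your map. 'mymap.yaml' is a
# placeholder - substitute whatever {map name} you chose above.
import yaml

with open("mymap.yaml") as f:
    meta = yaml.safe_load(f)
print("image file:", meta["image"])
print("resolution (m/cell):", meta["resolution"])
print("origin [x, y, yaw]:", meta["origin"])
```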
In Week 6 we developed some ROS nodes to analyse images from our simulated robot's camera, and we then enhanced this to allow our robot to detect coloured pillars in its simulated environment. Why not try a similar thing now, and see if you can get your robot to detect some coloured pillars that we have here in the lab instead?!
There are a few things that you'll need to do (and a few things to be aware of) before you get started on this exercise:
- First, you'll need to make sure that the camera nodes are running. You launched these in the ROBOT 2 terminal instance at the start of Exercise 1.
- Develop your ROS nodes inside the src directory of the package that you created on the laptop in Exercise 3.
- Use the Object Detection Template from Week 6 as a guide to help you. You'll need to modify this a bit for the real robot/laptop setup though (see the sketch after this list for an illustration):
  - The real robot publishes its camera images to a topic with a slightly different name to that used in simulation. Use rostopic list to identify the correct camera image topic on the real robot, and adapt the rospy.Subscriber() in the Object Detection Node accordingly.
  - Change the code to save images to an images directory inside your package, rather than the ~/myrosdata/week6_images/ folder that the template uses by default.
  - Determine the native dimensions of the images obtained by the real robot camera. The images are smaller than the ones we obtained in simulation, so you might want to adjust your crop dimensions accordingly.
- Obtain some images and analyse them using the image_colours.py node from the com2009_examples package. This package is installed on the laptop, so you can execute it in exactly the same way as in Week 6, using rosrun.
- Try to define some image masks so that your robot can detect one (or more) of the coloured pillars in the robot arena.
- Copy across the colour_search.py node from the com2009_examples package into your own package src directory and see if you can get this working so that it makes the robot stop turning once it is facing a coloured pillar in the robot arena!
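To illustrate the sort of modifications listed above, here's a minimal sketch. It is not the actual Week 6 template (which you should still use as your starting point): the topic name, crop region and HSV thresholds below are all placeholders for you to verify with rostopic list and image_colours.py.

```python
#!/usr/bin/env python
# Sketch: a cut-down object detection node for the real robot.
import rospy
import cv2
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()

def camera_cb(img_msg):
    img = bridge.imgmsg_to_cv2(img_msg, desired_encoding="bgr8")
    h, w = img.shape[:2]  # the real camera's native image dimensions
    # Example central crop only - adjust once you know the real dimensions:
    crop = img[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4]
    hsv = cv2.cvtColor(crop, cv2.COLOR_BGR2HSV)
    # Placeholder HSV thresholds - tune these using image_colours.py:
    mask = cv2.inRange(hsv, (115, 225, 100), (130, 255, 255))
    rospy.loginfo("image: %dx%d, matching pixels: %d",
                  w, h, cv2.countNonZero(mask))

rospy.init_node("object_detection_test")
# Adapt this to the camera topic that 'rostopic list' reports on the real robot:
rospy.Subscriber("/camera/color/image_raw", Image, camera_cb)
rospy.spin()
```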
When you're finished working with a robot, remember that it needs to be shut down properly.
- First, close down any active processes that are running on the robot by checking through any active ROBOT terminals and stopping these processes using Ctrl+C.
- Then, shut down the robot by entering the following command in ROBOT 1:
[ROBOT 1] $ off
Enter the password when asked, then wait for the "Connection to dia-waffleX closed" message.
COM2009/3009 Robotics Lab Course
Updated for the 2021-22 Academic Year
Dr Tom Howard | Multidisciplinary Engineering Education (MEE) | The University of Sheffield
The documentation within this Wiki is licensed under Creative Commons License CC BY-NC:
You are free to distribute, remix, adapt, and build upon this work (for non-commercial purposes only) as long as credit is given to the original author.