Part 3
- Exercise 1: Launching an Action Server and calling it from the command line
- Exercise 2: Working with KUKA LBR iiwa in simulation
If it isn't currently running then launch your WSL-ROS environment using the WSL-ROS shortcut in the Windows Start Menu. Once ready this will open up the Windows Terminal and an Ubuntu terminal instance (which we'll refer to as TERMINAL 1).
If you happen to have changed to a different university machine since Part 2 then you may wish to restore the work that you did in the previous sessions. Hopefully you ran rosbackup.sh to back up all your work before, so you should now be able to restore it by running the following command in TERMINAL 1:
[TERMINAL 1] $ rosrestore.sh
So far we have used the ROS Publisher-Subscriber communication method to pass information between nodes on a ROS network using topics and messages. This is very flexible: any node can publish messages to any topic on a ROS Network and any other node on the network can then subscribe to this topic to receive the information. Any number of nodes can publish or subscribe to the same topic at the same time.
There is another method of communication that we can use in ROS however, which works slightly differently (in fact there are two, but we'll only worry about one of them during this training). This is a type of communication that is typically used to invoke a particular discrete behaviour or perform a certain pre-defined task, and is therefore called a ROS Action. ROS Actions are based on a Server-Client model: One node provides the ability to perform the action (i.e. the Server) and then other nodes can request that this action is performed whenever required (i.e. Clients).
This communication method can be summarised as follows:
A client can ask for a certain robotic action to be performed by publishing a Goal to the server (via the same ROS topics and messages that you are already familiar with). On receipt of this, the server will perform the action, providing Feedback data to the client as it does so. Based on this feedback, the client can choose to abort the action at any time by publishing a Cancel message back to the server. If all is good however, and the server completes the action, then it will publish a Result to the client once it has completed the task successfully (show-off!). Throughout all this, the Action Framework (i.e. the processes that allow this Action mechanism to function) also broadcasts a generic Status signal to the ROS network as a more generalised indication of action progress (to echo the Feedback and Result, but with no real specifics).
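To make this concrete, here's roughly what a simple action client looks like in Python using the actionlib package, written against the camera sweep action you'll use in Exercise 1 below (a minimal sketch only: the CameraSweepAction and CameraSweepGoal class names are assumed from actionlib's standard naming convention rather than checked against the package):

```python
#!/usr/bin/env python3
# A minimal ROS Action client sketch. The server namespace and goal fields
# match the camera sweep action used in Exercise 1 below; the message class
# names are assumed from actionlib's standard naming convention.
import rospy
import actionlib
from com2009_actions.msg import CameraSweepAction, CameraSweepGoal

def feedback_cb(feedback):
    # Called each time the server publishes Feedback while the action runs
    rospy.loginfo("Feedback: %s", feedback)

rospy.init_node("camera_sweep_client")

# Connect to the server via its namespace (the same prefix as the five
# /camera_sweep_action_server/... topics you'll see in Exercise 1)
client = actionlib.SimpleActionClient("/camera_sweep_action_server", CameraSweepAction)
client.wait_for_server()

# Publish a Goal to the server
client.send_goal(CameraSweepGoal(sweep_angle=90.0, image_count=10),
                 feedback_cb=feedback_cb)

# Block until the server publishes a Result (we could call
# client.cancel_goal() at any point to abort instead)
client.wait_for_result()
rospy.loginfo("Result: %s", client.get_result())
```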
Here's a quick exercise to illustrate a fairly simple Action in action!
Here we'll use an action server to make the TurtleBot3 Waffle robot scan the environment and take pictures for us!
- First, you'll need to quickly pull down an update to one of the packages that are installed on the WSL-ROS system:
    - In TERMINAL 1, navigate to the com2009_actions package directory:
      [TERMINAL 1] $ cd ~/catkin_ws/src/com2009/com2009_actions
    - Then pull down the updates from GitHub:
      [TERMINAL 1] $ git pull
      Sorted!
- Now, use the following launch file to launch the robot into a simulated world and also launch an action server at the same time:
  [TERMINAL 1] $ roslaunch com2009_actions turtlebot3.launch
- Next, open up a new Windows Terminal instance from the Windows Start Menu (as you did earlier). We'll call this one WT(B).
- In WT(B), have a look at all the topics that are currently active on the ROS network (you should know exactly how to do this by now!)
  You should notice 5 items in that list with the /camera_sweep_action_server prefix:
  /camera_sweep_action_server/cancel
  /camera_sweep_action_server/feedback
  /camera_sweep_action_server/goal
  /camera_sweep_action_server/result
  /camera_sweep_action_server/status
  Do these look familiar from the figure above?!
- ROS Actions use topic messages, so we can tap into the ROS network and observe the messages being published to these in exactly the same way as we have done in Parts 1 and 2, using rostopic echo. In order to monitor some of these messages now, we'll launch a couple more separate terminal instances, so that we can view a few things simultaneously:
    - Launch an additional terminal instance from the Windows Start Menu again. This one will be WT(C).
    - Do this again to launch yet another terminal instance, which we'll call WT(D).
    - You should now have four Windows Terminal applications open! Arrange these so that they are all visible.
- In WT(C), run a rostopic echo command to echo the messages being published to the /camera_sweep_action_server/feedback topic:
  [WT(C)] $ rostopic echo /camera_sweep_action_server/feedback
  To begin with, you will see the message:
  WARNING: no messages received and simulated time is active. Is /clock being published?
  Don't worry about this.
- Do the same in WT(D), but this time echo the messages being published to the /result part of the action server message.
- Now, going back to WT(B), run the rostopic pub command on the /camera_sweep_action_server/goal topic, using the autocomplete functionality of the Linux command line to help you format the message correctly by entering:
  [WT(B)] $ rostopic pub /camera_sweep_action_server/goal[SPACE][TAB][TAB]
  This should provide you with the following:
  $ rostopic pub /camera_sweep_action_server/goal com2009_actions/CameraSweepActionGoal "header:
    seq: 0
    stamp:
      secs: 0
      nsecs: 0
    frame_id: ''
  goal_id:
    stamp:
      secs: 0
      nsecs: 0
    id: ''
  goal:
    sweep_angle: 0.0
    image_count: 0"
- Edit the Goal portion of the message by modifying the sweep_angle and image_count parameters:
    - sweep_angle is the angle (in degrees) that the robot will rotate on the spot
    - image_count is the number of images it will capture from its front-facing camera while it is rotating
- Once you have decided on some values, hit Enter to actually publish the message and call the action server. Keep an eye on all four terminal instances. What do you notice happening in each of them?
- Now, in WT(B):
    - Cancel the rostopic pub command by entering Ctrl+C.
    - Navigate to the directory that the images have just been created in:
      [WT(B)] $ cd ~/myrosdata/actions
    - Have a look at the contents of the directory using ll (a handy alias for the ls command with a few extra arguments):
      [WT(B)] $ ll
      You should see the same number of image files in there as you requested with the image_count parameter.
    - Launch eog in this directory and click through all the images to review the snaps that the robot has taken:
      [WT(B)] $ eog .
- To finish off, close down all the active processes:
    - Close down the eog window and close the WT(B) terminal instance.
    - Stop the rostopic echo commands that are running in WT(C) and WT(D) by entering Ctrl+C in each of them, and then close each of these terminal instances too.
    - Then, enter Ctrl+C in TERMINAL 1 to stop the Gazebo and Action Server processes. Leave TERMINAL 1 open.
ROS Actions are great for making a robot perform a pre-defined task that may take a bit of time to complete. This may be quite a complex task that has the potential to fail, or might need to be safely terminated in the event of other external factors or events taking place. The Action Framework makes this possible through the ability to cancel (or preempt) the behaviour that has been requested. This method is therefore perfect for applications such as robot arms, where we might invoke an action to make an arm move to a certain place, pick up a certain item or adopt a particular pose, but where we can be confident that we can easily stop the behaviour mid-way through, if required.
You'll therefore be interacting with ROS Actions during the challenge over the next two days as you work with the KUKA LBR iiwa Robot Arm. You won't necessarily need to know all the specifics of how ROS Actions work that we've covered here in order to bring the robot arm to life, but it's worth appreciating what's going on.
To start working with the second robot in this tutorial, the KUKA LBR iiwa Robot Arm, we will have to switch to a different set of packages.
Assuming you have TERMINAL 1 open (if not, just open a new one), navigate to the directory called iiwa_stack_ws in the home folder:
[TERMINAL 1] $ cd ~/iiwa_stack_ws
This is a dedicated workspace for the so-called iiwa stack, a collection of packages developed for programming the iiwa robot in ROS, both in the real world and in the simulator.
The stack is based on MoveIt!, an industry standard ROS library for programming industrial robots, specifically manipulators. It covers various areas, including motion planning, manipulation, 3D perception, kinematics, control and navigation.
Have a look at the following diagram for a bird's eye overview of how MoveIt! works (adapted from the official docs).
MoveIt! System Architecture
The key component here is the Move Group node. The term group here refers to a collection of joints and links that form a part of the robot's body.
Typically, two common groups are identified when it comes to robot arms: the manipulator and end_effector groups.
These can be linked to a move_base node if the robot is attached to a mobile base/moving platform; however, in our case we assume the robot is static.
The Move Group node serves as an integrator: pulling all the individual components together to provide a set of ROS actions and services for users to use.
Move Group Architecture
As can be seen in the figure, MoveIt! is configured for a particular robot through the ROS parameter server, using its URDF and SRDF models as well as other configuration information such as joint limits, kinematics, motion planning and perception settings.
There are at least three established ways in which a user can interact with the move_group: through the native C++ move_group_interface; through the Python wrapper package moveit_commander; or through the Motion Planning panel in the Rviz GUI.
Throughout the rest of this tutorial we will be using the moveit_commander
package, as it strikes a balance between flexibility and user-friendliness.
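To give an idea of what this looks like in practice, here is a minimal connection sketch (the "manipulator" group name comes from the groups described above, while the /iiwa namespace arguments are assumptions based on the iiwa stack's topic names; the demo script you'll run shortly is the definitive reference):

```python
#!/usr/bin/env python3
# Minimal moveit_commander connection sketch (the group name and /iiwa
# namespace are assumptions; see the bundled demo script for the real setup).
import sys
import rospy
import moveit_commander

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("iiwa_commander_sketch")

# Connect to the "manipulator" planning group served by the move_group node
group = moveit_commander.MoveGroupCommander(
    "manipulator", robot_description="/iiwa/robot_description", ns="/iiwa")

# Query some basic state information from the move_group
rospy.loginfo("Planning frame: %s", group.get_planning_frame())
rospy.loginfo("End-effector link: %s", group.get_end_effector_link())
rospy.loginfo("Current joint values: %s", group.get_current_joint_values())
```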
Among all the various things that the move_group does, the three most important and most relevant to the end user are MoveGroupAction, PickAction and PlaceAction.
These are ROS Action Servers that handle incoming requests to plan and execute movements, each corresponding to a particular type of movement.
The rest of the tutorial will focus on the MoveGroupAction library. Pick and Place are specific to the end effector, so these topics will be covered on Days 2 and 3. The exercises that follow will help you get to grips with the KUKA LBR iiwa Robot Arm and with the APIs that you'll use during the challenge to bring it to life (safely!).
For the purposes of this tutorial we cover three main ways in which one might control a robot arm with MoveIt! through the move_group (each is sketched in code after the list below), namely:
- Direct control by specifying a joint pose.
- Specifying a target pose in space, to which the IK-solver will produce a trajectory from the current pose.
- Planning a path with a set of waypoints in Cartesian coordinates for the end-effector to go through, subject to given parameters and constraints.
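Here is a hedged sketch of what each of these three methods looks like with moveit_commander (the group name, namespace and all target values are illustrative assumptions; refer to the bundled demo script for working values):

```python
#!/usr/bin/env python3
# Sketch of the three control methods: joint state control, target pose
# control, and Cartesian path planning. The group name, namespace and all
# target values are illustrative assumptions; see the demo script for
# working values.
import sys
import copy
import rospy
import moveit_commander

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("iiwa_motion_sketch")
group = moveit_commander.MoveGroupCommander(
    "manipulator", robot_description="/iiwa/robot_description", ns="/iiwa")

# 1. Direct control: specify a joint pose (one value per joint, in radians)
joint_goal = group.get_current_joint_values()
joint_goal[1] = 0.5                      # nudge the second joint, as an example
group.go(joint_goal, wait=True)
group.stop()

# 2. Target pose: the IK solver plans a trajectory from the current pose
pose_goal = group.get_current_pose().pose
pose_goal.position.z += 0.1              # 10 cm straight up from where we are
group.set_pose_target(pose_goal)
group.go(wait=True)
group.stop()
group.clear_pose_targets()

# 3. Cartesian path: plan through waypoints for the end-effector to follow
waypoints = []
wp = group.get_current_pose().pose
wp.position.x += 0.1
waypoints.append(copy.deepcopy(wp))
wp.position.y += 0.1
waypoints.append(copy.deepcopy(wp))
# eef_step = 1 cm interpolation resolution, jump_threshold disabled (0.0)
(plan, fraction) = group.compute_cartesian_path(waypoints, 0.01, 0.0)
group.execute(plan, wait=True)
```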
A few demos from the official Python Move Group docs were adapted for our iiwa (14kg model) and bundled together with a .launch
file to launch a model of "iiwa14" in Gazebo.
LBR iiwa14 main specs
The resulting MRC_2021
package is available in the Code
section of this repo, which you can clone and build using the following commands:
[TERMINAL 1] $ cd ~/iiwa_stack_ws/src
[TERMINAL 1] $ git clone https://github.com/tom-howard/MRC_2021
[TERMINAL 1] $ catkin build
The rest of the stack's features are beyond the scope of this guide and can be explored directly within the iiwa stack.
Once you've compiled the tutorial package, open a new terminal instance and close the old ones, to make sure all the latest changes have been applied and ROS can find your new package.
Run the iiwa_moveit.launch file to bring up an iiwa model simulated in Gazebo (you can use TAB autocompletion to save yourself some typing):
[new TERMINAL 1] $ roslaunch MRC_2021 iiwa_moveit.launch
The loading progress is shown in the terminal output as different elements are being launched.
You might see a few warnings and an error about the p
gain — you can safely ignore that.
The phrase 'You can start planning now!' indicates that the core libraries have been successfully loaded. In addition, two new windows should open: a Gazebo client with a model of the iiwa robot in an empty world, and an Rviz window showing the Motion Planning interface panel, also with a robot model. Rearrange the windows in such a way that you can see the robot model in both windows, as well as your terminals.
You can open a new terminal tab and have a look at all the new ROS nodes, topics, services and parameters that have been created:
[TERMINAL 2] $ rosnode list
[TERMINAL 2] $ rostopic list
[TERMINAL 2] $ rosparam list
[TERMINAL 2] $ rosservice list
All that you have learned in Parts 1 and 2 about nodes and topics can be applied here.
But don't worry if you start to feel a bit overwhelmed; we won't need to use many of these controls directly anyway!
First of all, correct permissions need to be set in order to run any Python scripts from the package (remember Part 1?). In a new terminal tab, run the following command:
[TERMINAL 3] $ roscd MRC_2021/src/
[TERMINAL 3] $ chmod +x *
Note: The star symbol (*) means 'select everything in that folder'.
Now, let's see a demo of the Move Group in action. In TERMINAL 3, run the following command:
[TERMINAL 3] $ rosrun MRC_2021 iiwa_move_group_demo.py
The demo is divided into three parts, and requires you to hit Enter
in order to proceed to the next part.
You should see a series of movements performed by the move_group
node, using three different methods:
- Joint state control
- Target pose control
- Cartesian path planning
All of these are arranged as calls to the move_group
action server and so the techniques you've learned in exercise 1 can be used here as well.
For example, one can monitor the progress of a task execution by asking the server to provide feedback:
[TERMINAL 2] $ rostopic echo /iiwa/move_group/feedback
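The same feedback can also be consumed from a node rather than the command line, for example (a small sketch, assuming the standard moveit_msgs feedback type on this topic):

```python
#!/usr/bin/env python3
# Subscribe to move_group's action feedback (a sketch; the topic name is
# taken from the rostopic echo command above).
import rospy
from moveit_msgs.msg import MoveGroupActionFeedback

def feedback_cb(msg):
    # msg.feedback.state reports the server's stage, e.g. "PLANNING" or "MONITOR"
    rospy.loginfo("move_group state: %s", msg.feedback.state)

rospy.init_node("move_group_feedback_monitor")
rospy.Subscriber("/iiwa/move_group/feedback", MoveGroupActionFeedback, feedback_cb)
rospy.spin()
```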
Note: Gazebo always shows the actual position of the arm; while in Rviz, rather counterintuitively, the actual position is shown as the end point of a transparent trajectory.
You can untick the Rviz animation loop in the MotionPlanning -> Planned Path panel.
Looking at the terminal output can help clarify the various stages of the demo. Occasionally, you might see a message with the following text:
[INFO] ABORTED: Solution found, but controller failed during execution
Presumably, this happens when the tolerance level for deviation from a planned trajectory is exceeded. In the demo, checks are made on the final arm state after each movement, so as long as the final position is reached within an acceptable tolerance range, these messages can be safely ignored.
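If you experiment with your own scripts and hit this a lot, moveit_commander lets you adjust the goal tolerances before planning. A hedged example, assuming a MoveGroupCommander instance like the group object in the earlier sketches (the values are arbitrary, and this is not something the demo itself does):

```python
# Relax how closely the final state must match the goal (values are
# arbitrary examples, not taken from the demo).
group.set_goal_joint_tolerance(0.01)        # radians, per joint
group.set_goal_position_tolerance(0.01)     # metres, end-effector position
group.set_goal_orientation_tolerance(0.05)  # radians, end-effector orientation
```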
Have a look at the code in this demo explained in more detail here.
This completes the taught part of the tutorial!
If you have time left, here are some ideas to try:
- Using the feedback from the action server to control the execution. For example, make a check that would cancel a movement if it takes more than 3 seconds to complete (one possible starting point is sketched after this list).
- See if you can do the same things you saw in the demo from within the Rviz GUI (Motion Planning Panel).
- Write a Python program that uses Cartesian trajectory planning to draw an '8' with the end-effector!
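One possible starting point for the first idea, reusing the actionlib client pattern from Exercise 1 (a sketch only: it uses a simple result timeout rather than inspecting the feedback messages themselves, and the message class names are assumed as before):

```python
#!/usr/bin/env python3
# Cancel an action if it hasn't finished within 3 seconds (sketch based on
# the camera sweep action from Exercise 1; message class names are assumed
# from actionlib's naming convention).
import rospy
import actionlib
from com2009_actions.msg import CameraSweepAction, CameraSweepGoal

rospy.init_node("impatient_client")
client = actionlib.SimpleActionClient("/camera_sweep_action_server", CameraSweepAction)
client.wait_for_server()

client.send_goal(CameraSweepGoal(sweep_angle=180.0, image_count=20))
# wait_for_result returns False if the result hasn't arrived before the timeout
if not client.wait_for_result(rospy.Duration(3.0)):
    rospy.logwarn("Taking too long, cancelling the goal")
    client.cancel_goal()
```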
Once again, save the work you have done here by running the following script in any idle WSL-ROS Terminal Instance:
$ rosbackup.sh
That's it for the ROS Training Course! Check out our 2nd Year Computer Science Practical ROS Course for everything that you have done here and more!
Finally:
Good luck in the Manufacturing Robotics Challenge over the next two days, and have fun!
Navigating This Wiki:
← Part 2: Sensors and Control
ROS Training
UK-RAS Manufacturing Robotics Challenge 2021
Tom Howard & Alex Lucas | The University of Sheffield