This tutorial illustrates how to program with RIDE, as described in Section III-B of the tutorial paper. It covers the implementation of the `plug_in` and `plug_in_vision` state machines together with their corresponding Apps and services.
Make sure that RIDE and RaceCom are installed on your computer, that you are connected to the Arm base or to Control, and that you are logged into the Core:

```
ride login <robot-ip>
```
Create your build folder by running the following command at the root of this repository (not in the `2_RIDE` folder!):

```
mkdir build && cd build
```
Run cmake with the following flags to build the RIDE tutorials:

```
cmake -DBUILD_RIDE=ON -DBUILD_FCI=OFF -DBUILD_LEARNING=OFF ..
```
and finally compile the RIDE tutorial with

```
make
```
To upload the state machines and Apps to the robot, run

```
make ride_tutorial_statemachines
```
After installing the bundle, the Apps should appear in the App pane. In addition, the bundle should be listed after entering the following command:

```
ride bundle list
```

Furthermore, the individual state machines and Apps should be listed when you enter:

```
ride node list
```
The Plug in App is located in the `ride-tutorial` bundle in the `statemachines` directory. Every bundle consists of a `manifest.json` defining the bundle, a resource folder that stores the icons for the context menu, and a source folder containing the state machine and App files.
The Plug in App of this tutorial is implemented in three different state machine files that can also be seen as layers:

- The App layer (`plug_in_app.lf`): This state machine implements the context menu where the user can set the required parameters in Desk, such as the socket and hole pose. It is also the 'main' execution file: after getting the context menu parameters, it runs the `plug_in_wrapper` state machine, which first converts the collected parameters to state machine parameters and then performs the plug insertion.
- The wrapper layer (`plug_in_wrapper.lf`): This state machine receives the high-level parameters from the context menus, converts them to the actual parameters required for the plug in, and then runs the `plug_in` state machine.
- The execution layer (`plug_in.lf`): This state machine implements the execution of the plug in task, given by the following state flow:
  - `move_to_socket_pose` - a Cartesian motion to the `socket_pose` with high stiffness for high accuracy and sensitivity
  - `set_move_configuration` - sets the collision thresholds and stiffness for the insertion
  - `insert` - a barrier executing the following states in parallel:
    - `wiggle` - executes a wiggle motion in the (x, y) plane
    - `press` - applies and regulates a constant force along the z-axis
    - `monitor_pose` - monitors the current end-effector pose and checks whether the hole pose has been reached within a certain tolerance
This implementation assumes that a plug in is successful when the robot reaches the hole pose within a certain tolerance and a maximum time.
The advantages of such a multi-layer implementation are, on the one hand, the reusability of individual state machines (e.g. `plug_in` and `plug_in_wrapper`) and, on the other hand, clarity: the implementation of the context menu, the parameter conversion, and the execution are split into three different files.
If you successfully installed the `ride_tutorial` bundle, the Plug in App will appear in the App pane of Desk.
To run it, follow these instructions:

- Create a new Task.
- Program the task: drag the Plug in App into the Timeline of the Task.
- Parameterize the task: click on the Plug in App and follow the instructions to teach the robot. Note that the context menus that appear in this step are defined in the `plug_in_app.lf` file. The expert parameters are preconfigured and don't need to be changed; however, you can play around with them and see how the behavior of the robot changes.
- Optional: add a Cartesian Motion App before the Plug in App to allow the experiment to be repeated. Teach the Cartesian Motion in a way that it unplugs the plug.
- Activate the external activation device and click on the run button to start the task. This will make the robot move!
You can trace the execution in a Linux terminal with

```
ride execution trace
```

and check the logs with

```
ride cli log
```
The Plug in App described above is capable of performing an insertion for a socket and hole pose taught with Desk. Let us now assume that a computer vision module estimates the socket pose and that we would like to use it in our Plug in state machine. To realize this we need: (1) a state machine executing the plug in and (2) a service providing the socket pose.
The following figure illustrates the interplay of interfaces in the system architecture and the involved bundles.
- Socket-pose service: The `socket-pose` service is located in `services/src/socket_service.cpp`. It implements the `getSocketPose` operation, which returns the socket pose, and periodically publishes the `socket-info` event with the number of detected sockets. The operation type is defined in the `GetSocketPose.op` file in the `services/msg` directory. It specifies an empty operation call that returns a 16-dimensional float array on success and a string in case of an error. Similarly, the event type is defined in the `SocketInfo.ev` file, which specifies a message given by an integer. Note: The socket pose and the number of sockets are hardcoded. Please adapt them to your setup.
- State machine: The state machine is implemented in the `plug_in_vision.lf` file located in the `ride_tutorial` bundle. It consists of the following state flow:
  - `check_for_sockets` - subscribes to the `socket-info` event and checks if any sockets are detected
  - `get_socket_pose` - calls the `getSocketPose` operation, which returns the pose
  - `compute_hole_pose` - computes `hole_pose` from `socket_pose`
  - `plug_in` - converts the parameters and inserts the plug into the detected socket by calling `plug_in_wrapper` from above
Note that this example focuses on how to connect an external service with a state machine. All hypothetical vision-related results, such as the socket pose, are hardcoded.
Before running the demo you probably want to adapt the hardcoded pose of the `socket-pose` service to your setup. To do so, guide the robot to the desired pose and run the following command in a terminal:

```
ride cli echo robot sensor_data
```

which echoes all robot sensor data. Copy the output of `O_T_EE` (the end-effector pose), paste it into the `services/src/socket_service.cpp` file (l. 22) as the new hardcoded `socket_pose`, and compile the service again as described in the installation instructions. After recompiling, the service should provide the new hardcoded pose.
To run the App, do the following:

- Run the service: open a terminal, go to the root of the repository, and run

```
./build/2_RIDE/services/socket_service <robot-ip> 11511 <network-interface>
```

with the following arguments:

  - `<robot-ip>`: the IP address of the robot (`robot.franka.de` if you are connected to the base, or your preconfigured robot IP otherwise).
  - `<network-interface>`: the network interface used by your computer to reach the robot. You can check your network interfaces with the `ip a` command. If you are using an Ethernet adapter, it will be named `enpXXXX`; wireless adapters are denoted `wlpXXX`. Ask your system administrator if you can't figure out your network interface.
After running the service, it should be listed when you enter the following command:

```
ride cli services
```
- Run the `plug_in_vision` state machine: first activate the external activation device, stop any execution currently running (e.g. the idle state machine) with

```
ride cli stop
```

and then start the state machine with

```
ride cli start plug_in_vision
```

This will make the robot move!