TF buffer server / client implementation based on ROS services. Implemented in C++, with Python bindings.
Supports ROS Noetic and Melodic.
This package allows nodes with medium- or low-frequency lookup requirements to outsource their TF lookups. A central server node maintains a TF buffer and handles requests from multiple clients. This saves resources, as the clients don't need to run their own TF listeners.
So far, that's nothing new: `tf2_ros` already comes with `tf2_ros::BufferClient`, which uses ROS actions to communicate requests and response status between clients and the server.
The idea is great, but it can scale badly with the number of clients and requests. This is due to the broadcast nature of the ROS action protocol: each client receives status-update callbacks for all currently active goals. For a TF server this is impractical, because one client node doesn't care about the status of transform requests made by other nodes (especially if it would otherwise query at a low frequency). Ultimately, this burns CPU unnecessarily; see the Benchmarks section below for a synthetic example.
In essence, `tf_service` simply uses persistent services instead of actions. The server handles service requests from clients asynchronously (using a multi-threaded `ros::AsyncSpinner`) and answers each requesting node directly, thus avoiding broadcasts to unaffected clients. Especially for Python clients, this can improve performance considerably.
```bash
rosrun tf_service server --num_threads 10
```
Starts the TF server node. The number of threads limits how many request queues the server can handle in parallel; adapt it to your requirements. Add `--help` to see all options.
The client implements the normal TF2 buffer interface.
The Python bindings are wrapped in a standard `tf2_ros.BufferInterface`:
```python
import rospy
import tf_service

# ...
buffer = tf_service.BufferClient("/tf_service")
buffer.wait_for_server()

# Use it like any other TF2 buffer.
if buffer.can_transform("map", "odom", rospy.Time(0), rospy.Duration(5)):
    buffer.lookup_transform("map", "odom", rospy.Time(0), rospy.Duration(1))
```
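For illustration, here is a minimal sketch of a complete node that uses the client at a low rate, following the same check-then-lookup pattern as above. The node name, frames, and rate are only examples, not part of the package:

```python
#!/usr/bin/env python
import rospy
import tf_service

if __name__ == "__main__":
    rospy.init_node("tf_service_example_client")  # hypothetical node name
    buffer = tf_service.BufferClient("/tf_service")
    buffer.wait_for_server()

    def on_timer(event):
        # Low-frequency lookup (1 Hz), answered by the central tf_service server.
        if buffer.can_transform("map", "odom", rospy.Time(0), rospy.Duration(1)):
            transform = buffer.lookup_transform("map", "odom", rospy.Time(0),
                                                rospy.Duration(1))
            rospy.loginfo("map -> odom translation: %s",
                          transform.transform.translation)

    rospy.Timer(rospy.Duration(1.0), on_timer)
    rospy.spin()
```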
The C++ client implements the standard `tf2_ros::BufferInterface`:
#include "ros/ros.h"
#include "tf_service/buffer_client.h"
// ...
tf_service::BufferClient buffer("/tf_service");
buffer.wait_for_server();
// Use it like any other TF2 buffer.
std::string errstr;
if (buffer.canTransform("map", "odom", ros::Time(0), ros::Duration(1), &errstr)) {
buffer.lookupTransform("map", "odom", ros::Time(0), ros::Duration(1));
}
Service connections are persistent for performance reasons. If the connection is lost, you can call the `wait_for_server` method with a timeout; it re-establishes a persistent connection if possible.
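As a rough sketch, a client could use this to recover from a dropped connection. Note that the timeout argument type and the boolean return value below are assumptions about the API, not documented behavior:

```python
import rospy
import tf_service

buffer = tf_service.BufferClient("/tf_service")
buffer.wait_for_server()

# ... if lookups later start failing, try to re-establish the persistent
# connection with a short timeout before the next request.
# (Timeout type and boolean return value are assumptions; check the client API.)
if not buffer.wait_for_server(rospy.Duration(2.0)):
    rospy.logwarn("tf_service server not reachable, will retry later")
```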
todo
`git clone` the repo into your catkin workspace, install the dependencies, and run:

```bash
catkin build tf_service
```
The Python client implementation is much more efficient than the old `tf2_ros.BufferClient`. The test scenario is 5 clients making requests at 10 Hz each. Compare the resource consumption of the old implementation:

```bash
roslaunch tf_service benchmark_py.launch use_old_version:=true
```
with the new implementation from this package:

```bash
roslaunch tf_service benchmark_py.launch use_old_version:=false
```
Service calls are blocking and should be "short". To reduce the risk of clients stalling the server, timeouts greater than 10 seconds are not allowed by default. You can adjust this limit using the `--max_timeout` flag of the server executable. If you need very long timeouts, you might be better off with the old action-based implementation from TF2.
Note that `tf_service`'s server can also provide the old interface if you pass `--add_legacy_server`, optionally under an extra namespace via `--legacy_server_namespace /tf2_buffer_server`.
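With the legacy server enabled, an unmodified action-based client can keep talking to that namespace. A minimal sketch (frames and node name are illustrative):

```python
import rospy
import tf2_ros

rospy.init_node("legacy_client_example")  # hypothetical node name

# Connects to the action-based interface exposed by tf_service's
# --add_legacy_server option, under the namespace chosen above.
legacy_buffer = tf2_ros.BufferClient("/tf2_buffer_server")
legacy_buffer.wait_for_server()
legacy_buffer.lookup_transform("map", "odom", rospy.Time(0), rospy.Duration(1))
```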
Third-party notice: directly bundled / derived third-party code is mentioned in the `NOTICE` file. All other ROS / system dependencies are listed in the `package.xml` file.