
Fix for rosbag play not publishing multiple latched messages on the same topic #1004

Closed

Conversation

@racko (Contributor) commented Feb 26, 2017

Proof of concept for a solution to #146 (in the rosbag context).

  • Starts one child process per callerid on each latched topic to
    re-publish that sender's messages (a sketch follows this list).
  • Shutdown behavior is strange: the first Ctrl+C kills the child
    processes, the second kills rosbag play.
  • Does not solve the issue of latched topics not being present in split
    bags after the first one.
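Below is a minimal sketch of the idea, not the actual patch: it assumes rospy and the rosbag Python API (the return_connection_header argument only exists in newer ros_comm releases), and every function name here is made up.

```python
import multiprocessing

import rosbag
import rospy


def publish_latched(callerid, messages):
    # Child process: re-latch every recorded message of one callerid.
    name = 'pub_latched_%s' % callerid.strip('/').replace('/', '_')
    rospy.init_node(name, anonymous=True)
    pubs = {}
    for topic, msg in messages:
        if topic not in pubs:
            pubs[topic] = rospy.Publisher(topic, type(msg),
                                          latch=True, queue_size=1)
        pubs[topic].publish(msg)
    rospy.spin()  # keep the latched connections alive


def spawn_latched_publishers(bag_path, latched_topics):
    # Group the latched messages by the callerid that originally sent them.
    by_callerid = {}
    with rosbag.Bag(bag_path) as bag:
        for topic, msg, t, header in bag.read_messages(
                topics=latched_topics, return_connection_header=True):
            callerid = header.get('callerid', '')
            by_callerid.setdefault(callerid, []).append((topic, msg))
    # One child process per callerid, publishing all of its latched topics.
    children = [multiprocessing.Process(target=publish_latched, args=(c, m))
                for c, m in by_callerid.items()]
    for child in children:
        child.start()
    return children
```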

@racko changed the title from "Proof of concept for a solution to #146" to "Fix for rosbag play not publishing multiple latched messages on the same topic" on Feb 26, 2017
@dirk-thomas (Member)

@ros-pull-request-builder retest this please

@dirk-thomas (Member)

The problem is that the spawned child process would operate completely independently from the original player instance. The player instance has an API to pause, resume, etc., and none of these commands would affect the spawned child process.

@racko (Contributor, Author) commented Aug 13, 2017

I finally found time to try and work out the kinks in this commit, but I
get it now. I give up :)

For everyone trying something similar, here are the reasons why spawning
a child process does not solve the problem (completely):

If the topics are more "regular" (i.e. more like /tf than like
/tf_static), and the latching is just "an extra", the child process
would have to be paused and unpaused together with the parent process,
contrary to my original assumptions. This could be done, but a simple,
unsynchronized implementation might behave less deterministically than
expected, while a synchronized version (in which all processes make sure
they publish their messages in order) could be less performant.
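For illustration, a hypothetical (and deliberately unsynchronized) way to propagate pause/resume to the children would be a shared flag; nothing like this exists in the commit.

```python
# Hypothetical pause propagation via a shared flag (not in the commit).
# Unsynchronized in the sense above: a child can still publish a message
# it had already dequeued when the parent cleared the flag.
import multiprocessing

running = multiprocessing.Event()
running.set()  # player starts unpaused; parent clears on pause, sets on resume


def child_publish_loop(messages, publish):
    for msg in messages:
        running.wait()   # block while the parent has the player paused
        publish(msg)     # races with a pause happening right here
```

A synchronized variant would need per-message acknowledgements between the player and every child so that everything is published in recorded order, which is exactly the performance cost mentioned above.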

The bigger problem is that it only works if every node publishes at most
one message on the latched topic: if one node has more than one
publisher latching messages on the same topic, we again run into the
issue that instead of one message per publisher we would only get one
message per node. That sounds contrived, but imagine a node with multiple
tf2::StaticTransformBroadcasters (see the sketch below). I guess it would
be beneficial if the callerid contained a "unique ros::Publisher id";
then we could get to the point of "one message per publisher". But if,
for example, a single StaticTransformBroadcaster is used to publish a
number of different static transformations, we'd have no way to latch
every single message, because it doesn't even work that way in the
recorded system: for each StaticTransformBroadcaster we only get a
single latched message.
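The contrived-but-real scenario, sketched in rospy (frame names are made up): one node, hence one callerid, holding two StaticTransformBroadcasters, i.e. two latched publishers on /tf_static. A per-callerid replay collapses these two latched messages into one.

```python
import rospy
import tf2_ros
from geometry_msgs.msg import TransformStamped


def make_tf(parent, child, x):
    t = TransformStamped()
    t.header.stamp = rospy.Time.now()
    t.header.frame_id = parent
    t.child_frame_id = child
    t.transform.translation.x = x
    t.transform.rotation.w = 1.0
    return t


rospy.init_node('two_broadcasters')
b1 = tf2_ros.StaticTransformBroadcaster()  # latched publisher #1
b2 = tf2_ros.StaticTransformBroadcaster()  # latched publisher #2
b1.sendTransform(make_tf('base_link', 'laser', 0.1))
b2.sendTransform(make_tf('base_link', 'camera', 0.2))
rospy.spin()  # a late subscriber gets one latched message per broadcaster
```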

So in the end, providing a launch file full of
static_transform_publisher calls (or something similar) still seems to
work better than the best this workaround could achieve, making it
something of a waste to keep working on it.
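For reference, the workaround looks roughly like this (frames and offsets are placeholders): one tf2_ros static_transform_publisher per static transform, instead of replaying /tf_static from the bag.

```xml
<launch>
  <node pkg="tf2_ros" type="static_transform_publisher" name="base_to_laser"
        args="0.1 0 0.2 0 0 0 base_link laser" />
  <node pkg="tf2_ros" type="static_transform_publisher" name="base_to_camera"
        args="0.2 0 0.4 0 0 0 base_link camera" />
</launch>
```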

P.S. Here's a TODO for the current code, if anybody cares:

The current code only works for a single latched topic per node: if we
have /tf_static and /rosout, we launch pub_tf_static twice per callerid,
and each instance will publish both topics. This would be easy to fix by
replacing the threads with a "callerid -> child" map (one process per
callerid is fine, because each child publishes all latched topics for
that callerid); a sketch follows below.
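The hypothetical shape of that fix, reusing the made-up helpers from the first sketch: key the children by callerid, so each callerid gets exactly one process and that process publishes all of its latched topics.

```python
children = {}  # callerid -> multiprocessing.Process

for callerid, messages in by_callerid.items():
    if callerid not in children:
        child = multiprocessing.Process(target=publish_latched,
                                        args=(callerid, messages))
        child.start()
        children[callerid] = child
```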

@racko closed this Aug 13, 2017