
Pause asynchronous ROS updates with synchronous ones #1635

Merged

Conversation

@rhaschke (Contributor) commented Jun 17, 2021

VisualizationManager uses [start|stop]Update() to pause ROS updates during dialog-box interaction (#631).
However, asynchronous updates via threaded_queue_ were still performed, leading to a memory leak (#1621):
messages were partially processed by the worker thread (i.e. kept in memory) but never finally processed
(and cleaned up) by the synchronous Display::update() call.
Pausing the threaded processing as well resolves that issue.

Fixes #1621. Fixes #868.
@simonschmeisser: Do you see any issue pausing asynchronous message processing too while displaying dialogs?
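For illustration, here is a minimal sketch of the pausing idea, not the actual rviz code: a worker loop serving the threaded queue that honours the same pause state as the synchronous update timer. Everything except threaded_queue_ and [start|stop]Update() is a hypothetical name.

```cpp
// Hypothetical sketch (invented names except threaded_queue_ and
// [start|stop]Update()): pause the threaded callback queue together with
// the synchronous update loop.
#include <atomic>
#include <ros/callback_queue.h>
#include <ros/ros.h>

class UpdatePauseSketch
{
public:
  void startUpdate() { paused_ = false; }  // resume sync and async processing
  void stopUpdate() { paused_ = true; }    // called while a dialog is shown

  // Runs in each worker thread serving threaded_queue_.
  void threadedQueueThreadFunc()
  {
    while (ros::ok())
    {
      if (paused_)
      {
        // Previously missing: without this check, messages were partially
        // processed here while Display::update() never finalized them.
        ros::WallDuration(0.01).sleep();
        continue;
      }
      // Process at most one pending callback, waiting up to 10 ms for work.
      threaded_queue_.callOne(ros::WallDuration(0.01));
    }
  }

private:
  std::atomic<bool> paused_{false};
  ros::CallbackQueue threaded_queue_;
};
```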

@simonschmeisser (Contributor) commented Jun 17, 2021 via email

@rhaschke (Contributor, Author) commented

First, I didn't modernize any code (e.g. switching to Qt's object tree instead of manual new/delete, or using Qt5-style signals and slots), and I don't think it is worth the effort.

> I don't fully understand why it was disabled in 2013 and if we still need this at all?

Me neither. There was an issue back then (#629), which was solved by disabling message processing while showing dialogs.
I thought it would be safer to stick with the current behaviour, but we could try to revert that.

> #868 does not sound like a memory leak but just an unbounded queue. Doesn't the PointCloudDisplay already have a queue size that could be used to discard old messages, as suggested in the comment?

You are right. It's "only" an unbounded queue. However, the queue grows rather fast and thus "feels" like a memory leak at first sight. There are several queues involved: First, there is the subscriber message queue, whose size is controlled by the display. Second, there is an internal queue that passes partially processed clouds from the processing thread to the main rendering thread. Its size isn't controlled (and grew without bounds).
My solution, in contrast to limiting the queue (#868 (comment)), fixes the issue at a deeper level: the messages shouldn't be processed at all, not only in PointCloudCommon, but in general by all displays employing threaded_nh_.
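To make the two-queue picture concrete, here is a hypothetical sketch of the handoff between the processing thread and the rendering thread. The real PointCloudCommon code differs; this only shows why nothing bounds the internal queue once Display::update() stops draining it, and where the queue-limiting alternative from #868 would slot in.

```cpp
// Hypothetical sketch (invented names): the internal handoff queue between
// the processing thread and the rendering thread.
#include <deque>
#include <memory>
#include <mutex>
#include <utility>

struct CloudInfo
{
  // partially processed point cloud data would live here
};

class CloudHandoffSketch
{
public:
  // Processing thread: enqueue a partially processed cloud.
  void push(std::shared_ptr<CloudInfo> cloud)
  {
    std::lock_guard<std::mutex> lock(mutex_);
    queue_.push_back(std::move(cloud));
    // The #868 alternative would cap memory here, e.g.:
    // while (queue_.size() > max_size_) queue_.pop_front();
  }

  // Rendering thread, from the synchronous Display::update(): drain the
  // queue. If update() is paused but the processing thread keeps pushing,
  // nothing ever removes entries: the "leak" observed in #1621.
  std::deque<std::shared_ptr<CloudInfo>> drain()
  {
    std::lock_guard<std::mutex> lock(mutex_);
    return std::exchange(queue_, {});
  }

private:
  std::mutex mutex_;
  std::deque<std::shared_ptr<CloudInfo>> queue_;
};
```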

rhaschke added 2 commits June 18, 2021 06:19
PointCloudCommon was defining its own callback queue and spinner
to allow for asynchronous processing of point cloud data.
However, as most PointCloud displays just inherit MessageFilterDisplay,
which uses the default update_nh_, the default synchronous
callback queue was eventually used.
(Although the callback queue was configured in the constructor to point
to PointCloudCommon's private queue, Display::initialize() was resetting
the queue back to the default.)
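The ordering problem from that commit message, as a hypothetical sketch (invented class name; the real PointCloudCommon/Display code differs in detail): the private queue set up in the constructor was silently undone when the display was initialized.

```cpp
// Hypothetical sketch of the ordering bug described above.
#include <ros/callback_queue.h>
#include <ros/ros.h>

class PointCloudCommonSketch
{
public:
  PointCloudCommonSketch()
  {
    // Intended: route point cloud callbacks to a private, threaded queue
    // that a dedicated spinner processes asynchronously.
    update_nh_.setCallbackQueue(&private_queue_);
  }

  void initialize()
  {
    // Display::initialize() later reset the queue to the default one,
    // so the private queue (and its spinner) were effectively never used.
    update_nh_.setCallbackQueue(ros::getGlobalCallbackQueue());
  }

private:
  ros::NodeHandle update_nh_;
  ros::CallbackQueue private_queue_;
};
```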
@rhaschke changed the title from "Stop asynchronous ROS updates with synchronous ones" to "Pause asynchronous ROS updates with synchronous ones" Jun 18, 2021
@rhaschke rhaschke merged commit 1df798c into ros-visualization:melodic-devel Jun 19, 2021
@rhaschke rhaschke deleted the fix-asynchronous-processing branch June 19, 2021 14:32
@rhaschke rhaschke mentioned this pull request Jun 24, 2021