Implement instanced message queues with varying depth (and rework rmw_wait for subscriptions) #27
Merged
Conversation
methylDragon changed the title from "Implement instanced message queues with varying depth" to "Implement instanced message queues with varying depth (and rework rmw_wait for subscriptions)" on Sep 3, 2020
methylDragon force-pushed the ch3/local-message-queues branch 4 times, most recently from a351833 to 13c2718, on September 3, 2020 07:07
gbiggs requested changes on Sep 3, 2020
methylDragon force-pushed the ch3/local-message-queues branch 6 times, most recently from 3a534a2 to 3303853, on September 3, 2020 18:01
gbiggs approved these changes on Sep 4, 2020
Co-authored-by: Geoffrey Biggs <gbiggs@killbots.net>
methylDragon force-pushed the ch3/local-message-queues branch from 3303853 to 8d5d048 on September 4, 2020 08:24
clalancette added a commit that referenced this pull request on Nov 9, 2023: "We do this by defining the TypeSupport class ourself." Signed-off-by: Chris Lalancette <clalancette@gmail.com>
Yadunund pushed a commit that referenced this pull request on Jan 12, 2024: "We do this by defining the TypeSupport class ourself." Signed-off-by: Chris Lalancette <clalancette@gmail.com>
ahcorde pushed a commit that referenced this pull request on Oct 29, 2024. Signed-off-by: ChenYing Kuo <evshary@gmail.com>
clalancette added a commit that referenced this pull request on Dec 6, 2024:

* chore: configure the compiliation
* chore: complete the 1st version
* fix: memory leak
* fix: z_error_t -> z_result_t
* Fix `scouting/*/autoconnect/*` per eclipse-zenoh/zenoh@b31a410 (#3)
* chore: checkout the local zenoh-c
* chore: polish z_open
* feat: `z_bytes_serialize_from_slice` without copy
* Initialize `query_` member of `ZenohQuery`
* refactor: use `z_owned_slice_t` instead
* chore: adapt the latest change of zenoh-c dev/1.0.0
* chore: use `strncmp` to avoid copying
* refactor: use `z_view_keyexpr_t` to avoid copying
* chore: adapt the new changes from zenoh-c and fix the bug in liveliness
* fix: segmentation fault due to the unallocated query memory
* fix: workaround the ZID parsing issue
* fix Zenoh Config read\check
* adopt to recent zenoh-c API changes
* fix: adapt the latest change of batching config
* build: deprecate the zenohc_debug and include the zenohc dependency in the zenoh_c_vendor
* Use main branch for upgrading to Zenoh 1.0
* Increase the delay in scouting (#16)
* ci: fix the argument order in the style CI
* refactor: use `z_id_to_string`
* build: enable the unstable feature flag
* build: bump up the zenoh-c commit
* build: update zenoh-c version
* fix: set the max size of initial query queue to `SIZE_MAX - 1`
* fix: iterator memory leak
* feat: update to zenoh-c 1.0.0.8 changes
* chore(style): address `ament_cpplint` and `ament_uncrustiy`
* fix: initiate zenoh logger
* chore: apply the suggestions
* chore: add the comments for the zenoh logger
* fix: store and destroy the subscriber properly
* chore: improve the null pointer check: NULL => nullptr
* Change liveliness tokens logs from warn to debug level (#22)
* fix: properly clone the pointer of query and reply to resolve the segfault in test_service__rmw_zenoh_cpp
* chore: update to zenoh-c 1.0.0.9 (#23)
* Thread-safe access to graph cache (#258)
* refactor(api): align with latest serialization changes
* chore(deps): bump up zenoh-c to 1.0.0.10
* chore(api): align with latest serialization changes
* fix: correct the sub_ke and selector_ke in the querying_subscriber
* fix: thread-safe publisher
* Enable history option for liveliness subscriber. (#27)
* refactor!: adopt the TLS config renaming
* refactor: allow Zenoh session to close without dropping
* fix: address the failure in rclcpp/test_wait_for_message of declaring a subscriber after the RMW has been shut down
* test: close but not drop the session
* fix: correct the merge
* chore: Explicit false in adminspace config
* fix: enable admin space in rmw router and ros nodes
* Bump zenoh-c version.
* Use the latest zenoh-c which fix some nav2 issues. (#31)
* Update config files according to Zenoh 1.0.0 DEFAULT_CONFIG.json5 (#33)
* chore(zenoh_c_vendor): bumb up zenoh-c version
* refactor: remove the free_attachment
* Fix unset request header writer GUID in `rmw_take_response`
* fix: keyexpr is missing in the service
* Avoid touching Zenoh Session while exiting.
* Register function right after opening Zenoh Session.
* chore(deps): bump up zenoh-c to 1.0.1
* fix: use TRUE value to configure the feature flag
* fix: correct typo `attachement` to `attachment`
* refactor: remove the warning of subscriber reliability QoS
* Fix `z_view_string_t` to `std::string` conversion
* refactor: zc_liveliness_* -> z_liveliness_* and bump up zenoh-c version
* refactor: reorder the cancel functions
* chore: reorder some lines of code
* refactor: add `session_is_valid` check
* fixup! refactor: reorder the cancel functions
* fixup! refactor: zc_liveliness_* -> z_liveliness_* and bump up zenoh-c version

Signed-off-by: Luca Cominardi <luca.cominardi@gmail.com>
Signed-off-by: ChenYing Kuo <evshary@gmail.com>
Signed-off-by: Gabriele Baldoni <gabriele.baldoni@gmail.com>
Signed-off-by: Yadunund <yadunund@gmail.com>
Co-authored-by: Mahmoud Mazouz <mazouz.mahmoud@outlook.com>
Co-authored-by: yellowhatter <bannov.dy@gmail.com>
Co-authored-by: Steven Palma <imstevenpmwork@ieee.org>
Co-authored-by: Julien Enoch <julien.e@zettascale.tech>
Co-authored-by: Chris Lalancette <clalancette@gmail.com>
This PR implements instanced message queues for subscriptions, so we no longer need to rely on a static message map.
This also means that we can now maintain message queues of varying depth!
Additionally, the issue of subscriptions fighting over messages is now resolved, since each subscription controls its own message queue.
Also, all messages are passed around the queues as shared_ptrs, which means deallocation is handled neatly via the reference counting mechanism, and we reduce data copying when duplicate subscriptions are attached to a topic.
And finally, this means that we can track which specific subscriptions are ready in rmw_wait, which saves extra cycles because subscriptions that are not ready will not run rmw_take_message_with_info!

To test
Use https://github.com/methylDragon/zenoh_ros_examples/ and try:
Terminal 1: (runs two subscriptions attached to /topic)
Terminal 2 (or more): (runs a publisher to /topic)
You may run more than one publisher to overload the message queue (the subscriptions are set to a max queue depth of 10). You should see a warning printed if messages are discarded because the queue has reached its maximum size.
Remember that you can pass --ros-args --log-level debug to see the debug logs.