
[WIP] Futures/coroutines for services #135

Closed
wants to merge 31 commits

Conversation


@sloretz sloretz commented Oct 30, 2017

Note: Ignore this PR. A lot of it has already been split into separate pull requests. I'll close this when there is nothing left to split.

This is part of service feature parity. It is not ready for review, but feedback is welcome if you can tolerate the chaotic writing below. Once #127 is merged this will need to be rebased onto master (omitting d980051).

Futures are a way to support multiple simultaneous client requests. Coroutines are a way to support futures inside of callbacks in an executor. Futures need a task running in parallel to update the result. A task can get to the executor via

  • a callback in the executor calling await future
    • Future.__await__ can invoke the task, and yield if the result is not ready yet.
    • The callback itself must be a coroutine (i.e. async def)
  • code outside giving the future to the executor directly
    • Add an equivalent of rclcpp::Executor::spin_until_future_complete(future)
      • In asyncio everything has a reference to the event loop so adding a task is easy, but not everything has a reference to the executor in rclpy. One possibility is a future having a reference to a task that needs to be executed to complete it. spin_until_future_complete could add that task to the executor.
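The await-based approach above can be sketched in plain Python. This is a hypothetical illustration, not rclpy's actual API: a Future whose __await__ yields control back to the executor until a result is set, and a minimal spin loop that resumes an async def callback by calling send() until it raises StopIteration.

```python
class Future:
    """Hypothetical minimal future (not the rclpy implementation)."""

    def __init__(self):
        self._done = False
        self._result = None

    def set_result(self, result):
        self._result = result
        self._done = True

    def done(self):
        return self._done

    def result(self):
        return self._result

    def __await__(self):
        # Yield to the executor until the result is ready, then
        # return it as the value of the 'await' expression.
        while not self._done:
            yield
        return self._result


def spin_until_complete(coro):
    """Drive a coroutine to completion by resuming it repeatedly."""
    while True:
        try:
            coro.send(None)
        except StopIteration as done:
            return done.value


async def callback(future):
    # An 'async def' callback suspends at the await until the
    # future's result has been set by some parallel task.
    response = await future
    return response


fut = Future()
fut.set_result('hello')
print(spin_until_complete(callback(fut)))  # prints 'hello'
```

A real executor would interleave the send() calls with rcl_wait, resuming only coroutines whose futures have been completed in the meantime.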

Current state:

  • Added executor support for coroutines
    • async def/await only. Does not support generator/yield from coroutines (but it could)
    • executor wait_for_ready_callbacks yields new callbacks first, then old coroutines that yielded the last time they were run.
      • Requires an executor to run the iterator returned from wait_for_ready_callbacks until StopIteration or old coroutines may never get resumed.
      • Maybe for some cases it makes sense to interleave new tasks with old tasks? I think wait_for_ready_callbacks should only yield new tasks, and another method should yield old ones. Custom executors should be responsible for implementing their own old-vs-new task strategy.
    • Service callbacks cannot be coroutines. (not impossible; haven't figured out a clean implementation).
  • Executor yields a Task instance
    • Task class hides the difference between a normal function and a coroutine from an executor
    • Task done callbacks get executed in series after the original callback/coroutine finishes
      • Task is greedy and executes as many done callbacks as it can, pausing only if one is a coroutine that yields.
      • asyncio.Task done callbacks get scheduled in the event loop, but this would be a problem in rclpy because it could mean another rcl_wait before executing a done callback.
    • Not using asyncio.Task because it is not thread safe, and it expects a reference to an asyncio event loop
  • Does not yet have a Future class
    • asyncio.Future is await-able but not thread safe and expects an event loop reference (though partially works without it)
    • concurrent.futures.Future is thread safe but not await-able (Maybe could be subclassed for this)
    • In asyncio, Task is a subclass of Future, so maybe a Task could be its own future too?
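The Task idea described above can be sketched as follows. All names here are hypothetical, not the rclpy implementation: a Task wraps either a plain function or an async def handler so the executor can treat both uniformly, and greedily runs its done callbacks in series once the handler finishes.

```python
import inspect


class Task:
    """Hypothetical sketch: hide function-vs-coroutine from the executor."""

    def __init__(self, handler):
        # Calling an 'async def' function returns a coroutine object that
        # must be resumed; a plain function just runs to completion.
        if inspect.iscoroutinefunction(handler):
            self._coroutine = handler()
        else:
            self._coroutine = None
        self._handler = handler
        self._done = False
        self._result = None
        self._done_callbacks = []

    def __call__(self):
        """Run or resume the task; the executor calls this repeatedly."""
        if self._done:
            return
        if self._coroutine is not None:
            try:
                self._coroutine.send(None)
                return  # Coroutine yielded; resume it on a later pass.
            except StopIteration as done:
                self._result = done.value
        else:
            self._result = self._handler()
        self._done = True
        # Greedily execute done callbacks in series, rather than
        # scheduling them the way asyncio.Task does.
        for done_callback in self._done_callbacks:
            done_callback(self)

    def done(self):
        return self._done

    def result(self):
        return self._result

    def add_done_callback(self, callback):
        self._done_callbacks.append(callback)


task = Task(lambda: 'pong')
task.add_done_callback(lambda t: print('done:', t.result()))
task()  # prints: done: pong
```

Running done callbacks inline (instead of scheduling them like asyncio.Task) avoids an extra rcl_wait between the original callback and its done callbacks, matching the behavior described above.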

connects to #123

@sloretz sloretz added the in progress Actively being worked on (Kanban column) label Oct 30, 2017
@sloretz sloretz self-assigned this Oct 30, 2017
@sloretz sloretz mentioned this pull request Oct 31, 2017
@sloretz sloretz force-pushed the rclpy_wait_for_service_async branch from cb1b4a3 to 50f0a27 on November 15, 2017
Moved wait set code to its own class for code reuse

Added timeout_sec_to_nsec()

wait_for_service() implemented with timers

Added unit tests for timeout_sec_to_nsec()

Added test for WaitSet class

Use negative timeouts to mean block forever

Double quotes to single quotes

Added wait_for_service() tests and fixed bugs it caught

Eliminate blind exception warning

Reduce flakiness of test by increasing time to 0.1s

Comment says negative timeouts block forever

Use :returns:

Move add_subscriptions()

arugments -> arguments

Daemon as keyword arg

Remove unnecessary namespace argument

Use S_TO_NS in test

More tests using S_TO_NS

Use monotonic clock for testing timer time

Increased test timeout by 30 seconds

CheckExact -> IsValid

Fixed wait_set not clearing ready_pointers

Remove unnecessary namespace keyword arg

Non-blocking wait

remove expression that always evaluates to True

Raise ValueError on invalid capsule

Simplified timeout_sec_to_nsec

Added 'WaitSet.destroy()' and made executor use it

GraphListener periodically checks if rclpy is shutdown

Misc fixes after pycapsule names

Wait set class always clears entities before waiting

Remove underscore on import

Reformat timeout line

Use () when raising exceptions

Removed _ on imports

Executor optimizations

~5% less overhead in wait_for_ready_callbacks()

Fixed executor yielding entities to wrong node

Also refactored some code to a sub-generator

Use list() only where necessary

Docstring in imperative mood

Executors reuse iterator

moved some wait_set code into C

Avoid another list comprehension

Replaced WaitSet with C code in executor

Remove test code

Use lists instead of set

Use locally defined function instead of member function

Shorten code using macro

Move everything to new wait_set code

protect against ImportError.path being None (#134)

Free memory when things don't go as planned (#138)
Python library _rclpy could not be imported due to rclpy_sigint.dll not
being on PATH. This uses 'APPEND_LIBRARY_DIRS' argument to
ament_add_nose_test to add that directory to the path.
@sloretz sloretz force-pushed the rclpy_wait_for_service_async branch from 50f0a27 to 0f0e20a on November 17, 2017
@sloretz sloretz mentioned this pull request Nov 27, 2017
@sloretz sloretz mentioned this pull request Jan 4, 2018
@sloretz sloretz commented Jan 4, 2018

Closing since it was split with the wait set refactor in #140 and the future/coroutine support in #166.

@sloretz sloretz closed this Jan 4, 2018
@sloretz sloretz removed the in progress Actively being worked on (Kanban column) label Jan 4, 2018
@sloretz sloretz deleted the rclpy_wait_for_service_async branch January 4, 2018 16:20