Feature: Direct Task Assignment #158
Conversation
Signed-off-by: Yadunund <yadunund@openrobotics.org>
Signed-off-by: Yadunund <yadunund@openrobotics.org>
Signed-off-by: Yadunund <yadunund@openrobotics.org>
Yes, this is a vulnerability of the FIFO approach which we discussed with our collaborators. The conclusion from that discussion was that it's the requester's responsibility to make sure that tasks are submitted in the correct order. I think the intended use case was that the requester would just stream in the requests right when the task should be performed, and the time field would be virtually irrelevant. Admittedly that does not jibe well with the ability for the requester to also set a start time.

With that in mind, we could do something slightly different than FIFO: whenever the robot is available, we choose the task with the lowest requested start time. Ties will be broken according to the FIFO order that the requests were received in.

I don't believe we should be using the task planner, since the purpose of this direct request pipeline was to skip all optimization, as we expect the requester to have already optimized. I don't think we should go with a priority scheme, since even just reordering the direct requests according to our own optimization criteria could interfere with the behavior that the requester intended.
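For concreteness, here is a minimal sketch (the names are hypothetical, not code from this PR) of the selection rule just described: whenever the robot is free, pick the direct request with the earliest requested start time, breaking ties by arrival order.

```cpp
#include <algorithm>
#include <chrono>
#include <cstddef>
#include <vector>

struct DirectRequest
{
  std::chrono::steady_clock::time_point requested_start_time;
  std::size_t arrival_sequence;  // monotonically increasing FIFO counter
};

// Whenever the robot becomes available, pick the direct request with the
// earliest requested start time; ties fall back to arrival (FIFO) order.
const DirectRequest* pick_next(const std::vector<DirectRequest>& queue)
{
  if (queue.empty())
    return nullptr;

  const auto it = std::min_element(
    queue.begin(), queue.end(),
    [](const DirectRequest& a, const DirectRequest& b)
    {
      if (a.requested_start_time != b.requested_start_time)
        return a.requested_start_time < b.requested_start_time;
      return a.arrival_sequence < b.arrival_sequence;
    });
  return &*it;
}
```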
Based on our discussions with collaborators, the behavior that's wanted for now is to simply not perform any dispatched requests on a robot until all the robot's direct requests are finished. This does pose a risk that dispatched requests could be needlessly stuck or delayed, but for now we are viewing that as an inherent risk of trying to mix these systems together. I think we can consider mitigation strategies (e.g. send a task request back to the dispatcher to be reconsidered) in a later release after this first draft of the feature.
Thanks for the feedback, @mxgrey.
I assumed we would still want to auto-include
Noted. I'll update the implementation to follow this behavior.
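For reference, a rough sketch (hypothetical types, not the actual TaskManager code) of that agreed behavior, where dispatched assignments are only started once the direct queue is empty:

```cpp
#include <deque>
#include <optional>

struct Assignment { /* stand-in for a queued task assignment */ };

// Direct requests always take precedence; a dispatched assignment is only
// started when there are no direct requests left in the queue.
std::optional<Assignment> choose_next(
  std::deque<Assignment>& direct_queue,
  std::deque<Assignment>& dispatch_queue)
{
  const auto pop_front =
    [](std::deque<Assignment>& q) -> std::optional<Assignment>
    {
      if (q.empty())
        return std::nullopt;
      Assignment next = q.front();
      q.pop_front();
      return next;
    };

  if (auto next = pop_front(direct_queue))
    return next;
  return pop_front(dispatch_queue);
}
```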
This is a tricky question. IIRC the task manager has a behavior where if the robot is idle and the battery is below a threshold then it will automatically tell the robot to charge, correct? If so, then we can start out by relying on this combined with responsible separation of task requests by the requester to keep the battery charged.

Admittedly that's not a robust system, so in a future release we could make some changes to the task planner that allow us to constrain the order that tasks occur in. If we tell the task planner to constrain the order of all direct tasks then the planner's output will simply be those tasks in their original order but with charge battery tasks mixed in. But I would save this for future work.
Yes, that should be possible. Just need to tweak the implementation to add this task to the direct task queue.
I've left a few remarks to look over.
rmf_fleet_adapter/src/rmf_fleet_adapter/agv/FleetUpdateHandle.cpp (outdated; review thread resolved)
Signed-off-by: Yadunund <yadunund@openrobotics.org>
Signed-off-by: Yadunund <yadunund@openrobotics.org>
Signed-off-by: Yadunund <yadunund@openrobotics.org>
Signed-off-by: Yadunund <yadunund@openrobotics.org>
Signed-off-by: Yadunund <yadunund@openrobotics.org>
Signed-off-by: Yadunund <yadunund@openrobotics.org>
Signed-off-by: Yadunund <yadunund@openrobotics.org>
Signed-off-by: Yadunund <yadunund@openrobotics.org>
@mxgrey this PR is ready for a review. Test results:
dispatch-2022-01-26_19.00.16.mp4

But if we submit the task via a direct API request, it gets assigned to the requested robot:

direct-2022-01-26_19.01.15.mp4
I left some minor remarks, but I don't see any of them as blockers.
I really like how the abstract priority system has allowed you to easily get the FIFO outcome out of the task planner without any changes to the task planner itself 👍
@@ -190,13 +231,19 @@ class TaskManager : public std::enable_shared_from_this<TaskManager>
   friend class ActiveTask;

   agv::RobotContextPtr _context;
-  std::weak_ptr<BroadcastClient> _broadcast_client;
+  std::optional<std::weak_ptr<BroadcastClient>> _broadcast_client;
I'm curious, what's the motive behind having std::optional<std::weak_ptr<T>> here? An ordinary std::weak_ptr is nullable, so what extra layer of expression do we achieve by wrapping a std::optional around it? I don't think there's anything wrong with it; I'm just not sure what it's meant to express.
I was trying to come up with a way to distinguish the situation where we get a nullptr when we try to lock the weak_ptr vs. the situation where we're able to lock, but the BroadcastClient was never initialized (server_uri not provided) and hence is a nullptr. If there is a more elegant way to do this, please let me know.
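To spell out that distinction (a hypothetical sketch, not the actual TaskManager code), the wrapper gives three distinguishable states:

```cpp
#include <memory>
#include <optional>

struct BroadcastClient { /* stand-in for the real class */ };

// 1. std::nullopt                       -> no server_uri was given, so a
//                                          BroadcastClient was never created
// 2. has_value() but lock() == nullptr  -> a client was created but has expired
// 3. has_value() and lock() != nullptr  -> a live client is available
void use_broadcast_client(
  const std::optional<std::weak_ptr<BroadcastClient>>& broadcast_client)
{
  if (!broadcast_client.has_value())
    return;  // broadcasting was never configured

  if (const auto client = broadcast_client->lock())
  {
    // safe to use *client here (actual BroadcastClient calls omitted)
  }
  // else: the client existed at some point but is no longer alive
}
```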
Ah that makes perfect sense, thanks for the explanation 👍
Oh, some more thoughts, though... Why would it be a std::weak_ptr to the broadcast client? Generally std::weak_ptr is used to avoid circular dependencies, or in cases where something just wants to observe the lifetime of a resource without interfering with its lifecycle. I don't think either of those is the case here. So maybe this should just generally be a std::shared_ptr<BroadcastClient>?

If we have cases where there might be a delay in the availability of a BroadcastClient instance, then we can consider using std::shared_ptr<std::shared_ptr<BroadcastClient>> so that the inner std::shared_ptr can be updated at any time, and everything holding the outer std::shared_ptr will automatically get access to the new resource.

These aren't things we need to worry about for this release, but maybe thoughts to consider for the next iteration.
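As a hypothetical illustration of that last idea (placeholder types, not code from this repository), the outer pointer can be handed out before the client exists and the inner pointer swapped in later:

```cpp
#include <memory>

struct BroadcastClient { /* stand-in for the real class */ };

// Holders keep the *outer* shared_ptr; when the client finally becomes
// available, only the *inner* shared_ptr is replaced, and every holder of the
// outer pointer automatically sees the new resource.
using BroadcastClientSlot = std::shared_ptr<std::shared_ptr<BroadcastClient>>;

int main()
{
  auto slot = std::make_shared<std::shared_ptr<BroadcastClient>>();

  // Handed out early, before any BroadcastClient exists.
  const BroadcastClientSlot observer = slot;

  // ... later, once the server_uri is known ...
  *slot = std::make_shared<BroadcastClient>();

  if (*observer)
  {
    // The observer now sees the newly created client.
  }
  return 0;
}
```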
Whoops, sorry, I merged it too quickly.

Ah yeah, the rationale was to deal with the delay in availability of the BroadcastClient. I will add a comment based on what you mentioned as a TODO for future reference 👍🏼
Signed-off-by: Yadunund <yadunund@openrobotics.org>
This PR introduces the ability to directly assign a task to a robot in a fleet, thereby bypassing the bidding system. Users can publish an rmf_task_msgs::msg::ApiRequest msg, with the json_msg field following the schema introduced in open-rmf/rmf_api_msgs#15, over the /task_api_requests topic.
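For illustration only, such a request might be published as below. The message type and topic come from this description, while the node name, QoS settings, request_id, and the JSON field values are placeholders; the authoritative payload shape is the schema in open-rmf/rmf_api_msgs#15.

```cpp
#include <chrono>
#include <memory>

#include <rclcpp/rclcpp.hpp>
#include <rmf_task_msgs/msg/api_request.hpp>

int main(int argc, char** argv)
{
  rclcpp::init(argc, argv);
  auto node = std::make_shared<rclcpp::Node>("direct_task_requester");

  // Transient-local QoS is assumed here so a late-joining fleet adapter can
  // still receive the request.
  auto publisher = node->create_publisher<rmf_task_msgs::msg::ApiRequest>(
    "/task_api_requests", rclcpp::QoS(10).transient_local());

  rmf_task_msgs::msg::ApiRequest request;
  request.request_id = "direct_task_0001";
  // Illustrative payload only; consult the schema in open-rmf/rmf_api_msgs#15
  // for the actual required fields.
  request.json_msg = R"({
    "type": "robot_task_request",
    "fleet": "my_fleet",
    "robot": "my_robot",
    "request": {}
  })";

  publisher->publish(request);

  // Give the middleware a moment to deliver the message before shutting down.
  rclcpp::sleep_for(std::chrono::seconds(1));
  rclcpp::shutdown();
  return 0;
}
```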
The implementation adds a _direct_queue in TaskManager which maintains the assignments for direct tasks. One point about this approach: maintaining both _queue and _direct_queue introduces the complexity of checking which queue to process in several functions, including _begin_next_task(), expected_finish_state() and remove_task_from_queue(). I think things could be a lot more elegant if we just maintained a single _queue. We could do this if the Task Planner were capable of generating assignments where direct tasks are assigned before dispatched ones. One way of doing this would be to introduce a new priority scheme which also incorporates the existing binary scheme: direct tasks would automatically be assigned a "high" priority such that they are assigned before dispatched ones, and the binary high/low priority could still apply within the dispatched and direct tasks. Another advantage of this would be that the planner's time segmentation would allow for a natural distribution of the dispatched and direct tasks, so there wouldn't be any confusion over whether to begin a dispatched or direct task next.

But right now, I'm trying to make things work with the separate queues and have the following questions:
1) Initially I thought the direct assignments should be in the order that they were received. But if that were the case, it could be troublesome if a user first submitted a direct task to begin one hour from now and then another direct task to begin immediately. Given this sequence, the first task would be at the top of the queue and the second one would only begin after it completes, which is unproductive. Hence, I'm currently replanning with all the tasks in _direct_queue as this naturally time-segments the assignments.

2) Decision on which task to begin: Say we have tasks queued in both _queue and _direct_queue. The top of _direct_queue has a deployment time in the future while the one in _queue is ready to be deployed. Should we start the one in _queue, or always complete all the _direct_queue tasks before commencing the dispatched ones? This could happen in a scenario where the fleet has already been awarded a dispatch task (which is now queued) and then receives a direct task request afterwards.

A lot of these problems can be solved trivially if we have a configuration option for the fleet adapter to respond to either dispatch or direct task requests, but not both. Mixing the two capabilities is tricky as the behaviour may favour one deployment scenario but not the other.

Please let me know your thoughts @mxgrey @cnboonhan