Dear Michael and Robert,

Firstly, thank you for building this amazing framework on top of MoveIt towards combined Task and Motion Planning.
I'm trying to create a custom stage that accepts grasp poses given by human input at runtime, e.g. via an HTC Vive or even simply the orange virtual arm in the MotionPlanningPlugin. This custom stage should continuously listen for new grasp poses, and MTC should keep planning additional trajectories whenever one is received. A grasp pose can be generated by the human at any time, and the maximum number of poses is unknown at compile time.
Could you offer some advice on the best way to build such a custom stage?
Sounds like a nice application!
In essence, this is similar to #196.
That PR provides a generic action client stage that requests grasp poses from an external action server.
The main difference is that you do not wait for a single result, but work with an ongoing stream of (slowly) incoming messages. However, the task planner will likely return from `Task::plan` at some point, even though you still want to process further messages.
So you could:

- implement a stage that reads messages on a topic/action interface,
- implement `YourStage::canCompute` to check for unprocessed messages (as well as its monitored `upstream_solutions_`), and
- compute new solutions based on both in `YourStage::compute` (see the sketch below this list).
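Here is a minimal sketch of such a stage, assuming ROS1 and a plain topic interface. The class name `InteractiveGraspProvider`, the topic `grasp_pose_input`, and the queue members are made up for illustration; the pattern of remembering monitored solutions mirrors what `stages::GeneratePose` does.

```cpp
#include <deque>
#include <mutex>

#include <ros/ros.h>
#include <geometry_msgs/PoseStamped.h>
#include <moveit/task_constructor/stage.h>

namespace mtc = moveit::task_constructor;

// Hypothetical stage: pairs each monitored upstream solution with each
// grasp pose received on a topic and spawns an InterfaceState per pair.
class InteractiveGraspProvider : public mtc::MonitoringGenerator
{
public:
  InteractiveGraspProvider(const std::string& name = "interactive grasp provider")
    : MonitoringGenerator(name) {
    ros::NodeHandle nh;
    // topic name is an assumption - publish grasp candidates here
    sub_ = nh.subscribe("grasp_pose_input", 10, &InteractiveGraspProvider::poseCallback, this);
  }

  bool canCompute() const override {
    std::lock_guard<std::mutex> lock(mutex_);
    // only compute when we have a monitored scene *and* an unprocessed pose
    return !upstream_solutions_.empty() && !pending_poses_.empty();
  }

  void compute() override {
    geometry_msgs::PoseStamped pose;
    {
      std::lock_guard<std::mutex> lock(mutex_);
      pose = pending_poses_.front();
      pending_poses_.pop_front();
    }
    // spawn one state per monitored upstream scene for this pose
    for (const mtc::SolutionBase* s : upstream_solutions_) {
      mtc::InterfaceState state(s->end()->scene());
      state.properties().set("target_pose", pose);
      spawn(std::move(state), 0.0);  // cost 0 here - adapt as needed
    }
  }

protected:
  // remember solutions of the monitored stage, as stages::GeneratePose does
  void onNewSolution(const mtc::SolutionBase& s) override { upstream_solutions_.push_back(&s); }

private:
  void poseCallback(const geometry_msgs::PoseStamped& msg) {
    std::lock_guard<std::mutex> lock(mutex_);
    pending_poses_.push_back(msg);
  }

  ros::Subscriber sub_;
  mutable std::mutex mutex_;
  std::deque<const mtc::SolutionBase*> upstream_solutions_;
  std::deque<geometry_msgs::PoseStamped> pending_poses_;
};
```

When wiring it into a task, remember to call `setMonitoredStage()` on it, just like for `GenerateGraspPose`; downstream, a `ComputeIK` wrapper can then turn the spawned `target_pose` property into actual robot states.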
`Task::plan` will return when all stages' `canCompute` return false, so you will also want to either:

- retrigger `plan()` (I'm not sure this is working as expected right now, but feel free to file an issue; see the sketch after this list),
- have your stage always return `canCompute() = true` and wait for a short `Duration` for incoming messages, or
- add some callback mechanism to trigger computation again only when new messages arrive.
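For the first option, a rough sketch, assuming ROS1 (the node name and sleep interval are placeholders, and the caveat above about re-triggering `plan()` applies):

```cpp
#include <ros/ros.h>
#include <moveit/task_constructor/task.h>

int main(int argc, char** argv) {
  ros::init(argc, argv, "interactive_grasp_demo");
  ros::AsyncSpinner spinner(1);  // keeps the pose subscriber running while plan() blocks
  spinner.start();

  moveit::task_constructor::Task task;
  // ... add stages here, including the InteractiveGraspProvider sketched above ...

  while (ros::ok()) {
    task.plan();                 // returns once no stage's canCompute() is true anymore
    ros::Duration(0.1).sleep();  // give new grasp poses a chance to arrive
  }
  return 0;
}
```

The `AsyncSpinner` is important here: without it, the stage's subscriber callback would never run while `plan()` blocks the main thread.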