Fixed spurious CloneException errors in planning stack. #298

Merged
mkoval merged 2 commits into master from bugfix/cloneplanningerror on Apr 24, 2016

Conversation

psigen (Member) commented Apr 19, 2016

Due to OpenRAVE weirdness, we are forced to throw a CloneException if a robot is cloned from a self-colliding configuration. We also clone at the beginning of some of our planning operations.

This means that calling a planning method while a robot is in self-collision can raise a CloneException, which is generally not expected or handled in the calling code. It is also not clear to calling code that this is actually a failure within a planner, and it breaks MetaPlanner logic.

In this PR, ClonedPlanningMethods will catch CloneExceptions and wrap them as ClonedPlanningErrors instead, which inherit from PlanningError and are thus handled correctly throughout the planning stack as a form of planning failure.
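To illustrate the shape of the fix (a minimal sketch, not the actual prpy source; the class bodies and decorator name here are assumptions, and the real environment-cloning logic is omitted):

```python
import functools


class PlanningError(Exception):
    """Base class for planning failures in the planning stack."""


class ClonedPlanningError(PlanningError):
    """Planning failure caused by a failed clone (added by this PR)."""


class CloneException(Exception):
    """Stand-in for the exception raised when cloning a robot that is
    in self-collision."""


def cloned_planning_method(func):
    """Decorator sketch: re-raise clone failures from a planning call
    as planning failures, so MetaPlanner logic handles them."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            # The real ClonedPlanningMethod clones the environment here
            # and runs the planner on the clone; cloning a self-colliding
            # robot raises CloneException.
            return func(*args, **kwargs)
        except CloneException as e:
            # Wrap it so callers only ever see PlanningError subclasses.
            raise ClonedPlanningError(str(e))
    return wrapper
```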

mkoval (Member) commented Apr 19, 2016

I am concerned that letting meta-planners catch CloneException as a PlanningError will mask deeper issues. All ClonedPlanningMethods will fail if the start configuration is in self-collision. There are only a handful of LockedPlanningMethods, which are all fast heuristic planners, so it is unlikely that any non-trivial planning calls will succeed.

I'd rather fail fast by letting the CloneException bubble up to the user, instead of silently ignoring most of the motion planners in a Sequence. What is the motivation for this change in behavior?
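(For context, a Sequence meta-planner catches PlanningError from each planner and moves on to the next; roughly like this hedged sketch, reusing the exception names above and a hypothetical PlanToConfiguration method:)

```python
def sequence_plan(planners, robot, goal):
    """Hedged sketch of Sequence-style meta-planning: try each planner
    in order, treating any PlanningError as 'try the next one'."""
    failures = []
    for planner in planners:
        try:
            return planner.PlanToConfiguration(robot, goal)
        except PlanningError as e:
            # A ClonedPlanningError lands here too, so a planner that
            # failed only because cloning failed is skipped silently.
            failures.append((planner, e))
    raise PlanningError('All planners failed: %r' % failures)
```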

psigen (Member, Author) commented Apr 19, 2016

The motivation for this is that no one actually handles CloneException anywhere right now. It's simply not something you expect to happen when making an opaque planning method call. You just haven't run into it.

If I put the robot into collision, and tell it to plan to another place, I don't expect to have to catch a CloneException to determine that I gave it an invalid starting location. I expect a PlanningError, because it failed to plan.
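Concretely, the calling pattern would look something like this (illustrative only; planner and goal_config are hypothetical):

```python
try:
    traj = planner.PlanToConfiguration(robot, goal_config)
except PlanningError as e:
    # After this PR, a self-colliding start configuration surfaces here
    # as a ClonedPlanningError instead of leaking out as an unrelated
    # CloneException.
    print('Planning failed:', e)
```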

Note that the only situation that can produce a CloneException right now is one in which no planner can succeed: LockedPlanningMethods are also going to fail. So I see no reason we shouldn't handle it the same way as any other error that makes planning fail.

mkoval (Member) commented Apr 21, 2016

I see your point. I was missing the obvious (in hindsight) fact that a LockedPlanner would also fail because the robot starts in self-collision. I will merge this once Travis is happy.

psigen (Member, Author) commented Apr 24, 2016

@mkoval bump.

mkoval (Member) commented Apr 24, 2016

Travis is happy. Merging.

mkoval merged commit 95119fc into master on Apr 24, 2016
mkoval deleted the bugfix/cloneplanningerror branch on Apr 24, 2016