
UAV unable to replan due to terminal point of trajectory in collision #52

Open
Swadine opened this issue Mar 14, 2023 · 2 comments

Swadine commented Mar 14, 2023

#1 A major problem I've noticed with the planner is that it cannot plan past tall or wide structures directly in front of it; because of its limited FOV, it gets stuck in an endless loop of replanning. This is what the replanning looks like in rviz:
[GIF: rviz view of the drone repeatedly replanning in front of the obstacle]
If you look closely at the point cloud, there is a tall obstacle in front of the drone; my theory is that the limited FOV makes it impossible to plan a path over or around it.

This is what the ego_planner node continuously prints:
[Screenshot: ego_planner console output, 2023-03-13]

The node also prints the following, in case it helps: what does "base_point and control_point too close" mean?
[Screenshot: additional ego_planner console output, 2023-03-14]

I wonder whether we need a Hamiltonian perturbation to solve this problem?
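
Below is a minimal sketch, not EGO-Planner's actual API, of one possible workaround for the "terminal point of trajectory in collision" case: before handing the local target to the optimizer, walk back along the straight line from the drone to that target until a collision-free point is found, and replan toward that point instead. `pullBackLocalTarget` and the `occupied` predicate are hypothetical names; in the real code the predicate would wrap the grid-map occupancy query.

```cpp
#include <array>
#include <cmath>
#include <functional>
#include <optional>

using Vec3 = std::array<double, 3>;

// Walk from the (in-collision) local target back toward the start in small
// steps and return the furthest collision-free point along that line, if any.
std::optional<Vec3> pullBackLocalTarget(const Vec3& start, const Vec3& target,
                                        const std::function<bool(const Vec3&)>& occupied,
                                        double step = 0.1) {
  Vec3 dir{target[0] - start[0], target[1] - start[1], target[2] - start[2]};
  const double len = std::sqrt(dir[0] * dir[0] + dir[1] * dir[1] + dir[2] * dir[2]);
  if (len < 1e-6) return std::nullopt;

  for (double s = len; s >= 0.0; s -= step) {
    const Vec3 p{start[0] + dir[0] * s / len,
                 start[1] + dir[1] * s / len,
                 start[2] + dir[2] * s / len};
    if (!occupied(p)) return p;  // first free point found while retreating
  }
  return std::nullopt;  // whole segment blocked: a new local goal is needed
}
```

If the whole segment is blocked (e.g. a wide wall filling the FOV), pulling back will not help and some lateral exploration is needed, which is what the random local-goal idea in the follow-up comment below tries to address.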

#2 I also noticed that although $dist=0.5$, the drone is still reluctant to come within roughly 2 meters of obstacles most of the time, yet when an obstacle is just outside its sensing range it will fly straight into it after a sudden yaw. I think this is due to the lack of a map that keeps history.

#3 I want to ask why start_pos differs from odom_pos in ego_replan_fsm. In my view, the trajectory should be planned from the drone's current position; otherwise, if the controller lags slightly, the trajectory keeps running without regard to where the drone actually is. This raises the question: shouldn't there be closed-loop feedback between the controller and the trajectory planner? Any idea how to merge the two?
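
As a reference point for #3, here is a minimal sketch (not the actual ego_replan_fsm code; the struct and function names are assumptions) of one possible compromise: keep seeding the replan from the state the previous trajectory predicts, so consecutive B-splines stay continuous, but fall back to the current odometry whenever the tracking error grows beyond a tolerance, which gives the closed-loop behaviour asked about above.

```cpp
#include <array>

using Vec3 = std::array<double, 3>;

struct State {
  Vec3 pos, vel, acc;
};

// If the drone has drifted more than `tol` metres from where the previous
// trajectory says it should be, restart planning from odometry; otherwise keep
// the trajectory state so consecutive plans remain smooth and continuous.
State chooseReplanStart(const State& traj_state, const State& odom_state,
                        double tol = 0.3) {
  const double dx = traj_state.pos[0] - odom_state.pos[0];
  const double dy = traj_state.pos[1] - odom_state.pos[1];
  const double dz = traj_state.pos[2] - odom_state.pos[2];
  const double err_sq = dx * dx + dy * dy + dz * dz;
  return (err_sq > tol * tol) ? odom_state : traj_state;
}
```

The 0.3 m tolerance is only an illustrative number; starting from odometry on every replan would close the loop completely, but it can introduce jerks when the controller tracks with a steady offset.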

@bigsuperZZZX Any help in modifying the codebase is appreciated. Thank you.

Swadine (Author) commented Mar 19, 2023

edit:
#1 A similar question was raised here, and the only solution I can think of right now is to give the drone a randomly sampled point consistent with my goal, and keep giving it different local goals until it eventually explores its way around and reaches the required goal. Do you think this approach is correct?
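
To make the idea concrete, here is a minimal sketch of what the randomly sampled local goal could look like (the function name, forward bias, and radius are illustrative assumptions, not anything from the codebase): sample candidates in a forward-biased region around the drone and keep the first one the map reports as free.

```cpp
#include <array>
#include <cmath>
#include <functional>
#include <optional>
#include <random>

using Vec3 = std::array<double, 3>;

// Sample a collision-free local goal near the drone, biased toward the final
// goal so that the planner still makes progress while it "explores" around
// large obstacles. Returns nothing if no free sample is found.
std::optional<Vec3> sampleLocalGoal(const Vec3& pos, const Vec3& goal,
                                    const std::function<bool(const Vec3&)>& occupied,
                                    double radius = 5.0, int max_tries = 50) {
  static std::mt19937 rng{std::random_device{}()};
  std::uniform_real_distribution<double> u(-1.0, 1.0);

  // Unit vector toward the final goal, used to bias the samples forward.
  Vec3 dir{goal[0] - pos[0], goal[1] - pos[1], goal[2] - pos[2]};
  const double len = std::sqrt(dir[0] * dir[0] + dir[1] * dir[1] + dir[2] * dir[2]);
  if (len > 1e-6)
    for (double& d : dir) d /= len;

  for (int i = 0; i < max_tries; ++i) {
    const Vec3 cand{pos[0] + radius * (0.5 * dir[0] + 0.5 * u(rng)),
                    pos[1] + radius * (0.5 * dir[1] + 0.5 * u(rng)),
                    pos[2] + radius * (0.5 * dir[2] + 0.2 * u(rng))};
    if (!occupied(cand)) return cand;  // hand this to the planner as the local goal
  }
  return std::nullopt;  // no free sample: hover, or shrink the radius and retry
}
```

Whether this converges depends on the bias and radius; a pure random walk can wander, so keeping some pull toward the final goal (the 0.5 * dir term here) seems necessary.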

#3 I fixed this using the discussion here.

bigsuperZZZX (Member) commented:

Q1: Just try it.
Q2: Try to reduce the speed and obstacle density.
