Consider log-form distribution #94

Open
mkolodziejczyk-piap opened this issue Oct 2, 2022 · 7 comments
Labels
enhancement New feature or request

Comments

@mkolodziejczyk-piap

https://clearpathrobotics.com/blog/2022/05/indiana-university-explores-collision-free-navigation-in-cluttered-environments/

It seems like they're using the nav stack.

@SteveMacenski
Collaborator

SteveMacenski commented Oct 3, 2022

Thanks for sharing! This is the paper https://arxiv.org/pdf/2203.16599.pdf

Might be worth looking at the log-distribution, but reading the results I'm not particularly impressed. The change in distributions would, sure, improve smoothness, but it isn't going to solve it "enough" to call things 'smooth' without other actions. They also don't really compare apples-to-apples, since the Gaussian distribution isn't tuned to be on par with their distribution of trajectories in the experiments. Nevertheless, most of the metrics were < 1% different, so it's really difficult for me to say with any degree of certainty that those changes matter. I suspect the only reason for the few failures of normal MPPI is an intentional decision to set the distribution parameters poorly enough to make it so.

This seems like a potential new option we could offer once this is 'done' for a hardware testing phase, but I don't think it's necessarily going to be an objective improvement. Though, what I like about this paper is that it's a model for how we could potentially structure a paper about the work in this repository if we choose to. I think the novelty of this work is in the cost function design and in adding benchmarking against commonly used existing trajectory planners.

Thanks for sharing this! It certainly got my brain moving a bit. The variation described is also trivial to implement, so it might be worth the couple of hours just to test and see (a rough sketch of the sampling change is below). Though I'm quite surprised this made it through RA-L. It's well put together, but the results and actual change are pretty unimpressive - more of a conference paper.
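
For reference, a minimal sketch of that variation, assuming the normal-log-normal (NLN) mixture described in the paper; the function name and parameters here are illustrative, not from this repository or the paper's code:

```cpp
#include <cstddef>
#include <random>
#include <vector>

// Sketch: injected control noise for one sampled trajectory.
// Standard MPPI draws eps ~ N(0, sigma); the log-form variant draws the
// product of a Gaussian and a log-normal sample (an NLN mixture).
// mu_ln and sigma_ln are illustrative tuning parameters, not values from the paper.
std::vector<double> sample_noise(std::size_t horizon, double sigma,
                                 double mu_ln, double sigma_ln,
                                 bool log_form, std::mt19937 &rng) {
  std::normal_distribution<double> gauss(0.0, sigma);
  std::lognormal_distribution<double> lognorm(mu_ln, sigma_ln);
  std::vector<double> noise(horizon);
  for (auto &n : noise) {
    n = log_form ? gauss(rng) * lognorm(rng)  // NLN mixture sample
                 : gauss(rng);                // plain Gaussian sample
  }
  return noise;
}
```

Everything else in the controller (rollouts, cost evaluation, softmax weighting of trajectories) stays the same, which is part of why it would be cheap to try.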

@SteveMacenski
Collaborator

SteveMacenski commented Oct 6, 2022

Closing - but thanks for bringing it to our attention! Actually, I'll just retarget the ticket towards trying that out.

@SteveMacenski SteveMacenski reopened this Oct 6, 2022
@SteveMacenski SteveMacenski changed the title worth reading Consider log-form distribution Oct 6, 2022
@SteveMacenski SteveMacenski added the enhancement New feature or request label Oct 7, 2022
@SteveMacenski
Collaborator

@mkolodziejczyk-piap I saw you have a fork using torch -- does that help with performance? Just curious if we should consider that here.

@mkolodziejczyk-piap
Author

Actually, I gave up on the idea of using torch (for many reasons) in favour of arrayfire (with flashlight in the future). My af branch is still WIP and needs a rebase, but I'm planning to come back to development after a break. Apart from that, I'm also planning to turn Costmap2D into an af::array GPU grid, based on the hypergrid project, to get fast footprint collision detection (a rough sketch of the idea is below). Here's my TODO. So, still a lot to do to make it work.
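
For what it's worth, a rough sketch of what a GPU-resident collision check could look like with ArrayFire; the cost layout, lethal threshold, and bounding-box approximation are assumptions for illustration, not the hypergrid or Costmap2D API:

```cpp
#include <arrayfire.h>

// Sketch: costmap held on the GPU as an af::array of per-cell costs
// (rows x cols). Checks whether any cell inside the footprint's bounding
// box reaches the lethal threshold. A real implementation would rasterize
// the actual footprint polygon; the box check and the threshold value
// here are assumptions.
bool footprint_in_collision(const af::array &costmap,
                            int row0, int row1, int col0, int col1,
                            int lethal_threshold = 253) {
  af::array patch = costmap(af::seq(row0, row1), af::seq(col0, col1));
  return af::anyTrue<bool>(patch >= lethal_threshold);
}
```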

@artofnothingness
Owner

artofnothingness commented Oct 25, 2022

What were the reasons for not using torch?

@mkolodziejczyk-piap
Author

Mostly because libtorch doesn't support JIT and vectorization. I also feel that the pace of development of the project has slowed down recently and that its purpose is mainly to execute PyTorch models in C++. Nevertheless, when I get to model execution I'll consider using libtorch models (jitted in Python) instead of flashlight.
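
For context, running a Python-jitted (TorchScript) model from C++ would look roughly like this; the model path and input shape are placeholders, not anything from this project:

```cpp
#include <torch/script.h>
#include <vector>

int main() {
  // Load a TorchScript module exported from Python, e.g. via
  // torch.jit.script(model).save("model.pt"). The path is a placeholder.
  torch::jit::script::Module module = torch::jit::load("model.pt");

  // Placeholder input; the shape depends entirely on the exported model.
  std::vector<torch::jit::IValue> inputs;
  inputs.push_back(torch::randn({1, 8}));

  torch::Tensor output = module.forward(inputs).toTensor();
  return output.numel() > 0 ? 0 : 1;
}
```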

@SteveMacenski
Collaborator

Well, do let us know if you find you can get better performance with either library. We're pretty happy with xtensor, but obviously faster is always better. I should be back to working on this once I get through my backlog from Japan (maybe still a week or two).
