
A problem about the experiments. #34

Open
Nimo-zl opened this issue Sep 27, 2022 · 5 comments

Nimo-zl commented Sep 27, 2022

Thank you for sharing this great work! I have a question about the experiments:

The results from running this code seem far from those reported in the paper. All params are default; after 16 training epochs:
loss=4.445
FDE: 3.1506308334590956
MR(2m,4m,6m): (0.4871615584819174, 0.2329565318727421, 0.13079283688769763)

What could be the problem?

@GentleSmile (Member)

This is the result of single-trajectory prediction. Running the evaluation will produce the multi-trajectory prediction results.
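To illustrate the difference, here is a toy sketch (not the repo's code) of why multi-trajectory metrics look much better: single-trajectory FDE uses only the top-ranked prediction, while minFDE over K trajectories takes the closest one, so it can only be smaller or equal.

```python
import numpy as np

rng = np.random.default_rng(0)
gt_endpoint = np.array([10.0, 5.0])                              # ground-truth final position
pred_endpoints = gt_endpoint + rng.normal(0, 2.0, size=(6, 2))   # K=6 predicted endpoints (toy data)

# Distance from each predicted endpoint to the ground truth
dists = np.linalg.norm(pred_endpoints - gt_endpoint, axis=1)

single_fde = dists[0]   # FDE of the top-1 trajectory only
min_fde = dists.min()   # minFDE over all K trajectories

assert min_fde <= single_fde
print(f"top-1 FDE = {single_fde:.3f}, minFDE over K = {min_fde:.3f}")
```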


Nimo-zl commented Sep 27, 2022

Thank you for your reply! I have another question:

The evaluation step contains "optimize miss rate" and "optimize minFDE", so how can these be optimized after training? And how does the param "MRminFDE" work?

@GentleSmile
Copy link
Member

The optimization strategy used during evaluation can be found in the paper. MRminFDE controls the ratio of optimizing miss rate versus minFDE. For example, "0.5" means optimizing miss rate and minFDE simultaneously.
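A hypothetical sketch of how a weight like MRminFDE could trade off the two objectives when scoring a candidate goal set; the function name, formula, and inputs are assumptions for illustration, not the repo's actual implementation.

```python
import numpy as np

def combined_cost(goal_set, goal_candidates, probs, ratio, miss_threshold=2.0):
    """Expected cost of a selected goal set under a discrete goal distribution.

    ratio=1.0 optimizes only the expected miss rate, ratio=0.0 only the
    expected minFDE; 0.5 weighs both simultaneously.
    """
    # Distance from every candidate goal to its nearest selected goal
    d = np.linalg.norm(
        goal_candidates[:, None, :] - goal_set[None, :, :], axis=-1
    ).min(axis=1)
    expected_mr = np.sum(probs * (d > miss_threshold))   # expected miss rate
    expected_fde = np.sum(probs * d)                     # expected minFDE
    return ratio * expected_mr + (1.0 - ratio) * expected_fde

# Tiny deterministic example: two candidate goals, one selected goal
cands = np.array([[0.0, 0.0], [10.0, 0.0]])
probs = np.array([0.5, 0.5])
goal_set = np.array([[0.0, 0.0]])
print(combined_cost(goal_set, cands, probs, ratio=0.5))  # 0.5*0.5 + 0.5*5.0 = 2.75
```

Under this reading, a set of goals is chosen to minimize the weighted sum of the two expected errors, which is why the trade-off can still be tuned after training.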


Nimo-zl commented Oct 8, 2022

Thank you for your reply!

It seems that the multi-trajectory results are much better than the single-trajectory ones; why is that? During evaluation, how are miss rate and FDE calculated: as the average over the multiple trajectories, or from the top-1? I'm sorry, but I can't find this in the code.

@GentleSmile (Member)

We use the argoverse library to calculate the metrics. See https://eval.ai/web/challenges/challenge-page/454/evaluation
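For reference, the Argoverse-style multimodal metrics are best-of-K rather than an average: minFDE takes the closest of the K predicted endpoints, and a scenario is a "miss" only if even that closest endpoint exceeds the distance threshold. A minimal sketch of those definitions (not the argoverse library's code), with thresholds matching the MR(2m,4m,6m) numbers above:

```python
import numpy as np

def min_fde_and_miss(pred_endpoints, gt_endpoint, thresholds=(2.0, 4.0, 6.0)):
    """Return best-of-K final displacement error and per-threshold miss flags."""
    d = np.linalg.norm(np.asarray(pred_endpoints) - np.asarray(gt_endpoint), axis=1)
    best = d.min()                                    # minFDE over the K trajectories
    return best, tuple(best > t for t in thresholds)  # miss if even the best is too far

# Two predicted endpoints at distances 5.0 and 3.0 from the ground truth
best, misses = min_fde_and_miss([[5.0, 0.0], [0.0, 3.0]], [0.0, 0.0])
print(best, misses)  # 3.0 (True, False, False): a miss at 2m, but not at 4m or 6m
```

The dataset-level miss rate is then the fraction of scenarios flagged as misses, which is why K trajectories yield far lower MR and FDE than a single one.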
