Screenshot from the paper "PlanT: Explainable Planning Transformers via Object-Level Representations", as Kait0 mentioned. This may lead us to think further: with a dataset roughly 10x larger (2-3M frames), other methods might also reach such high scores, or possibly even higher. The recent NeurIPS paper "Model-Based Imitation Learning for Urban Driving" (MILE) also uses a 2.9M-frame dataset; I linked their issue as well.
Please check these issues for more: opendilab/InterFuser#3, wayveai/mile#4
Question: How can we prove or analyze whether it is the method itself or the larger dataset that brings the performance boost, when the dataset is 6x-10x larger than the others'?
Table of dataset sizes (total frames in the whole dataset) and DS scores; I linked each method to its GitHub repository here:
*: counted from the online leaderboard only, not from the paper's result tables; x means no entry on the online leaderboard
Hint: 1M = 1,000K
If you notice other methods, please leave a comment and I will update the table here.
This note is for people who want to work on this task: architecture and methods are important, but the way the dataset is collected (the expert policy) and the dataset size should also be considered when making comparisons.
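One way to probe the method-vs-data question is a data-scaling ablation: retrain the same method on subsets of its dataset and fit score against log(dataset size), so the data effect can be estimated and factored out. Below is a minimal sketch of the curve-fitting step; the function names are my own and all numbers you would feed in are placeholders, not results from any paper.

```python
# Sketch of a data-scaling fit: score ~ a * log10(frames) + b.
# If the fitted trend for method A, extrapolated to method B's dataset size,
# still falls short of B's score, the gap is harder to attribute to data alone.
# (Hypothetical helper names; inputs are illustrative, not published results.)
import numpy as np

def fit_scaling_trend(frames, scores):
    """Fit score as a linear function of log10(dataset frames).

    frames: dataset sizes used in the ablation runs (e.g. 1/8, 1/4, 1/2, full).
    scores: the benchmark score (e.g. Driving Score) for each run.
    Returns the slope and intercept (a, b) of score = a * log10(frames) + b.
    """
    frames = np.asarray(frames, dtype=float)
    scores = np.asarray(scores, dtype=float)
    a, b = np.polyfit(np.log10(frames), scores, deg=1)
    return a, b

def predicted_score(frames, a, b):
    """Extrapolate the fitted trend to a new dataset size."""
    return a * np.log10(frames) + b
```

This only gives a rough estimate (scores often saturate rather than grow log-linearly forever), but even two or three subset runs make the comparison across methods with very different dataset sizes far more interpretable.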