
Some questions about the open-loop evaluation framework. #29

Closed · flclain opened this issue May 29, 2023 · 4 comments
Labels: good issue (Good for newcomers), question (Further information is requested)

Comments


flclain commented May 29, 2023

Congratulations on your great work, and thanks for sharing the code.

In the paper "Rethinking the Open-Loop Evaluation of End-to-End Autonomous Driving in nuScenes", the authors design an MLP-based method that takes only ego-state information (e.g., past trajectory, velocity) as input and directly outputs the future trajectory of the ego vehicle, without using any perception or prediction information such as camera images or LiDAR.
Surprisingly, such a simple method achieves state-of-the-art end-to-end planning performance on the nuScenes dataset, reducing the average L2 error by about 30%.
They concluded that maybe we need to rethink the current open-loop evaluation scheme of end-to-end autonomous driving in nuScenes.
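
For concreteness, here is a minimal sketch of the kind of ego-state-only MLP planner described above (layer sizes, dimensions, and names are illustrative assumptions, not the actual AD-MLP implementation):

```python
import torch
import torch.nn as nn

class EgoStateMLPPlanner(nn.Module):
    """Toy ego-state-only planner: past ego trajectory and current velocity in,
    future ego trajectory out. Sizes are illustrative, not taken from AD-MLP."""

    def __init__(self, past_steps=4, future_steps=6, hidden=512):
        super().__init__()
        in_dim = past_steps * 2 + 2  # (x, y) per past step plus current (vx, vy)
        self.future_steps = future_steps
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, future_steps * 2),  # (x, y) waypoint per future step
        )

    def forward(self, past_xy, velocity):
        # past_xy: (B, past_steps, 2); velocity: (B, 2)
        feats = torch.cat([past_xy.flatten(1), velocity], dim=1)
        return self.mlp(feats).view(-1, self.future_steps, 2)
```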

What do you think of this experiment? https://github.com/E2E-AD/AD-MLP
Is there a problem with their experimental results, or do we really need a new open-loop/closed-loop evaluation framework?


Yihanhu (Collaborator) commented Jun 8, 2023

Thank you for your interest; you've raised an excellent question that has indeed been widely debated. I will try to address it from four perspectives:

Firstly, open-loop evaluation is a commonly used measure in machine-learning-based planning and correlates strongly with closed-loop metrics. For example, in the nuPlan competition, open-loop scores are used as one of the reference indicators, and most algorithms show good consistency between the two.

Secondly, introducing the vehicle's historical state into imitation learning can cause a phenomenon known as 'causal confusion'. While incorporating history does improve open-loop metrics, it significantly reduces closed-loop performance, which undermines the reference value of those metrics. In the experiment you mention, the introduction of historical information likewise causes the open-loop metrics to lose their reference value.
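
As a rough illustration of this point (a toy sketch, not part of any evaluation in the papers): even a trivial constant-velocity extrapolation of the ego history produces plausible open-loop trajectories, which is one way to see how history-based inputs can score well in open loop without implying good closed-loop driving.

```python
import numpy as np

def constant_velocity_plan(past_xy, dt=0.5, future_steps=6):
    """Toy baseline: hold the last observed ego velocity over the whole horizon.
    past_xy: (T, 2) array of past ego positions sampled every dt seconds."""
    v = (past_xy[-1] - past_xy[-2]) / dt             # last observed velocity (vx, vy)
    steps = np.arange(1, future_steps + 1)[:, None]  # 1 ... future_steps, as a column
    return past_xy[-1] + steps * v * dt              # (future_steps, 2) future waypoints
```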

Thirdly, we chose the nuScenes dataset because it is the largest and most comprehensive dataset currently available for end-to-end scenarios. However, it is still too small for planning purposes. We hope to see larger-scale end-to-end autonomous driving datasets emerge in the future (such as nuPlan) to facilitate further end-to-end research.

Lastly, there is no well-established evaluation method for end-to-end autonomous driving algorithms based on real data. Most algorithms are only tested in open-source simulators such as CARLA. However, there is a considerable 'domain gap' between CARLA's simulation and real data, which limits what the evaluation metrics can tell us. In addition, CARLA lacks expert data for training: current research manually builds a very naive rule-based expert to generate ground truth (GT), but this expert is far removed from human driving behavior, which significantly limits the benchmark and prevents it from effectively validating imitation-learning algorithms. Industry has introduced several excellent evaluation systems based on real data, such as NVIDIA's DRIVE Sim and Waabi's UniSim. These are commendable initiatives, but unfortunately they are not open source, so we cannot test or experiment on them.

Personally, I believe that closed-loop evaluation based on real data will be crucial for addressing autonomous driving problems in the future. We also have plans in this area, and we hope to see more open-source work in this field to aid academic research.

Yihanhu added the labels good issue (Good for newcomers) and question (Further information is requested) on Jun 8, 2023

flclain (Author) commented Jun 12, 2023

Thank you very much for your detailed reply!

YTEP-ZHI (Collaborator) commented:

I'm closing this issue as it's settled. Feel free to reopen it if needed, @flclain.


ilnehc (Contributor) commented Sep 13, 2023

A few points should be stated clearly:

UniAD reports the planning results for each timestamp, i.e., L2_2s is the L2 error at timestamp 2s.

However, the planning metrics in VAD and AD-MLP adopt the code from ST-P3. For the reported L2 and collision rate (Col) at each timestamp, they average the results from time 0 up to that timestamp, i.e., L2_2s = avg(L2_0.5s, L2_1s, L2_1.5s, L2_2s), and the final Avg. L2 value is the average of the three already-averaged values.

This does not change the finding that AD-MLP still achieves very good results compared with carefully designed models (its Avg. L2 under the UniAD setting is the L2_3s in the AD-MLP paper). It just means that directly taking numbers from the papers will lead to some inconsistency.
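
To make the two conventions concrete, here is a small sketch (the error values, and the assumption that each paper's reported average is the mean of its three horizon columns, are illustrative):

```python
import numpy as np

# Per-timestamp L2 errors at 0.5 s intervals up to 3 s (illustrative numbers only).
times = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
l2 = {0.5: 0.20, 1.0: 0.35, 1.5: 0.55, 2.0: 0.80, 2.5: 1.05, 3.0: 1.35}

def uniad_style(t):
    """UniAD convention: report the error at the exact timestamp t."""
    return l2[t]

def stp3_style(t):
    """ST-P3 / VAD / AD-MLP convention: average the errors from 0.5 s up to t."""
    return np.mean([l2[s] for s in times if s <= t])

print("L2_2s:", uniad_style(2.0), "vs", stp3_style(2.0))  # 0.80 vs 0.475

# The final "Avg. L2" column is the mean of the three reported horizons in both tables.
print("Avg. L2:", np.mean([uniad_style(t) for t in (1.0, 2.0, 3.0)]),
      "vs", np.mean([stp3_style(t) for t in (1.0, 2.0, 3.0)]))
```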
