Official PyTorch implementation of LATR: 3D Lane Detection from Monocular Images with Transformer
Code will be released.
Results on the OpenLane benchmark:

| Models | F1 (%) | Accuracy (%) | X error near (m) | X error far (m) | Z error near (m) | Z error far (m) |
|---|---|---|---|---|---|---|
| 3DLaneNet | 44.1 | - | 0.479 | 0.572 | 0.367 | 0.443 |
| GenLaneNet | 32.3 | - | 0.593 | 0.494 | 0.140 | 0.195 |
| Cond-IPM | 36.3 | - | 0.563 | 1.080 | 0.421 | 0.892 |
| PersFormer | 50.5 | 89.5 | 0.319 | 0.325 | 0.112 | 0.141 |
| CurveFormer | 50.5 | - | 0.340 | 0.772 | 0.207 | 0.651 |
| PersFormer-Res50 | 53.0 | 89.2 | 0.321 | 0.303 | 0.085 | 0.118 |
| LATR-Lite | 61.5 | 91.9 | 0.225 | 0.249 | 0.073 | 0.106 |
| LATR | 61.9 | 92.0 | 0.219 | 0.259 | 0.075 | 0.104 |
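The X/Z error columns report the average lateral (X) and height (Z) offsets, in meters, between matched predicted and ground-truth lanes, evaluated separately over near and far longitudinal ranges. Below is a minimal NumPy sketch of that computation; the shared resampling positions and the 40 m near/far split are assumptions based on the commonly used OpenLane/Gen-LaneNet protocol, not code from this repo:

```python
import numpy as np

def xz_errors(pred, gt, y_samples, near_limit=40.0):
    """Mean X (lateral) and Z (height) errors for one matched lane pair.

    pred, gt: arrays of shape (N, 2) holding (x, z) values already
    resampled at the common longitudinal positions `y_samples` (meters).
    The 40 m near/far split is an assumption based on the common
    evaluation protocol, not taken from this repo's evaluation code.
    """
    near = y_samples < near_limit
    far = ~near
    x_err = np.abs(pred[:, 0] - gt[:, 0])
    z_err = np.abs(pred[:, 1] - gt[:, 1])
    return {
        "x_near": x_err[near].mean(), "x_far": x_err[far].mean(),
        "z_near": z_err[near].mean(), "z_far": z_err[far].mean(),
    }

# Toy usage: a straight lane sampled every 10 m from 0 to 90 m.
ys = np.arange(0.0, 100.0, 10.0)
gt = np.stack([np.full_like(ys, 1.5), np.zeros_like(ys)], axis=1)
pred = gt + np.array([0.1, 0.02])  # constant 10 cm X / 2 cm Z offset
print(xz_errors(pred, gt, ys))
```

Matching between predicted and ground-truth lanes, and the F1/Accuracy numbers above, follow the official OpenLane evaluation kit.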
Results on the Apollo 3D Lane Synthetic benchmark (balanced scene). Please kindly refer to our paper for the performance on the other scenes.

| Scene | Models | F1 (%) | AP (%) | X error near (m) | X error far (m) | Z error near (m) | Z error far (m) |
|---|---|---|---|---|---|---|---|
| Balanced Scene | 3DLaneNet | 86.4 | 89.3 | 0.068 | 0.477 | 0.015 | 0.202 |
| | GenLaneNet | 88.1 | 90.1 | 0.061 | 0.496 | 0.012 | 0.214 |
| | CLGo | 91.9 | 94.2 | 0.061 | 0.361 | 0.029 | 0.250 |
| | PersFormer | 92.9 | - | 0.054 | 0.356 | 0.010 | 0.234 |
| | GP | 91.9 | 93.8 | 0.049 | 0.387 | 0.008 | 0.213 |
| | CurveFormer | 95.8 | 97.3 | 0.078 | 0.326 | 0.018 | 0.219 |
| | LATR-Lite | 96.5 | 97.8 | 0.035 | 0.283 | 0.012 | 0.209 |
| | LATR | 96.8 | 97.9 | 0.022 | 0.253 | 0.007 | 0.202 |
Results on the ONCE-3DLanes benchmark:

| Method | F1 (%) | Precision (%) | Recall (%) | CD error (m) |
|---|---|---|---|---|
| 3DLaneNet | 44.73 | 61.46 | 35.16 | 0.127 |
| GenLaneNet | 45.59 | 63.95 | 35.42 | 0.121 |
| SALAD | 64.07 | 75.90 | 55.42 | 0.098 |
| PersFormer | 72.07 | 77.82 | 67.11 | 0.086 |
| LATR | 80.59 | 86.12 | 75.73 | 0.052 |
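In the table above, F1 is the harmonic mean of precision and recall (e.g., for LATR: 2 × 86.12 × 75.73 / (86.12 + 75.73) ≈ 80.59), and CD error measures the point-wise distance between matched predicted and ground-truth lanes. Below is a hedged sketch of a chamfer-style distance between two 3D lane point sets; the ONCE-3DLanes toolkit defines the official (unilateral) variant, so this symmetric form is for intuition only:

```python
import numpy as np

def chamfer_distance(pred_pts, gt_pts):
    """Symmetric chamfer distance between two 3D lane point sets (meters).

    pred_pts: (N, 3) predicted lane points; gt_pts: (M, 3) ground truth.
    Sketch only: the ONCE-3DLanes benchmark uses its own (unilateral)
    CD definition, implemented in the official evaluation toolkit.
    """
    # Pairwise Euclidean distances, shape (N, M).
    d = np.linalg.norm(pred_pts[:, None, :] - gt_pts[None, :, :], axis=-1)
    # Average nearest-neighbor distance in both directions.
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

# Toy usage: a straight lane vs. a copy shifted 5 cm laterally.
ys = np.linspace(0.0, 50.0, 20)
gt = np.stack([np.full_like(ys, 1.5), ys, np.zeros_like(ys)], axis=1)
pred = gt + np.array([0.05, 0.0, 0.0])
print(chamfer_distance(pred, gt))  # ~0.05
```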
This library draws inspiration from OpenLane, GenLaneNet, mmdetection3d, SparseInst, ONCE, and many other related works; we thank their authors for sharing code and datasets.
If you find LATR useful, please cite:
```bibtex
@article{luo2023latr,
  title={LATR: 3D Lane Detection from Monocular Images with Transformer},
  author={Luo, Yueru and Zheng, Chaoda and Yan, Xu and Kun, Tang and Zheng, Chao and Cui, Shuguang and Li, Zhen},
  journal={arXiv preprint arXiv:2308.04583},
  year={2023}
}
```