The World's First Perception and Reasoning Benchmark for Scene Structure in Autonomous Driving.
- Paper (Accepted at NeurIPS 2023 Track Datasets and Benchmarks)
- CVPR 2023 Autonomous Driving Challenge - OpenLane Topology Track
- CVPR 2024 Autonomous Grand Challenge - Mapless Driving Track
- Point of contact: Huijie or Tianyu
We maintain a leaderboard and test server for the task of Driving Scene Topology. If you wish to add new results to the leaderboard or modify existing ones, please drop us an email.
We maintain a leaderboard and test server for the task of OpenLane Topology. If you wish to add new results to the leaderboard or modify existing ones, please drop us an email following the instructions here.
- News
- Introducing OpenLane-V2 Update
- Task and Evaluation
- Highlights
- Getting Started
- License & Citation
- Related Resources
Note
The difference between `v1.x` and `v2.x` is that we updated the APIs and materials on lane segment and SD map in `v2.x`.
An update to the evaluation metrics led to differences in TOP scores between `vx.1` (`v1.1`, `v2.1`) and `vx.0` (`v1.0`, `v2.0`). We encourage the use of `vx.1` metrics. For more details, please see issue #76.
- `2024/06/01` The Autonomous Grand Challenge wraps up.
- `2024/03/01` We are hosting the CVPR 2024 Autonomous Grand Challenge.
- `2023/11/01` Devkit `v2.1.0` and `v1.1.0` released.
- `2023/08/28` Dataset `subset_B` released.
- `2023/07/21` Dataset `v2.0` and Devkit `v2.0.0` released.
- `2023/07/05` The test server of OpenLane Topology is re-opened.
- `2023/06/01` The Challenge at the CVPR 2023 Workshop wraps up.
- `2023/04/21` A baseline based on InternImage released. Check it out here.
- `2023/04/20` The OpenLane-V2 paper is available on arXiv.
- `2023/02/15` Dataset `v1.0`, Devkit `v1.0.0`, and baseline model released.
- `2023/01/15` Initial OpenLane-V2 dataset sample `v0.1` released.
We are happy to announce an important update to the OpenLane family, featuring two sets of additional data and annotations.
- Map Element Bucket. We provide a diverse span of road elements (as a `bucket`) for building the driving scene, on par with all elements in an HD map. Armed with the newly introduced lane segment representation, we unify various map elements to incorporate comprehensive aspects of the captured static scenes and to empower DriveAGI.
The proposed lane segment representation is published with LaneSegNet at ICLR 2024!
- Standard-definition (SD) Map. As a new sensor input, the SD map supplements multi-view images with topological and positional priors to strengthen structural knowledge in the neural networks.
Given sensor inputs, models are required to perceive lane segments, instead of the lane centerlines used in the task of OpenLane Topology. In addition, pedestrian crossings and road boundaries are required to build a comprehensive understanding of the driving scene. The OpenLane-V2 UniScore (OLUS) summarizes model performance in all aspects.
Given sensor inputs, participants are required to deliver not only perception results for lanes and traffic elements but also the topology relationships among lanes and between lanes and traffic elements. In this task, the OpenLane-V2 Score (OLS) is used to evaluate model performance.
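As a rough illustration of how the overall score combines its sub-metrics, the sketch below assumes the formulation from the paper, where OLS averages the two detection mAPs with square-rooted topology scores. The function name and signature are hypothetical; the devkit provides the official implementation.

```python
import math

def ols(det_l: float, det_t: float, top_ll: float, top_lt: float) -> float:
    """Sketch of the OpenLane-V2 Score (OLS): the average of the lane and
    traffic-element detection mAPs and the square-rooted topology scores.
    All inputs are assumed to lie in [0, 1]."""
    return (det_l + det_t + math.sqrt(top_ll) + math.sqrt(top_lt)) / 4
```

The square root lifts the typically small topology scores so that all four terms contribute on a comparable scale; a model that is perfect in every sub-metric scores an OLS of 1.0.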
One of the core formulations in the bucket is Lane Segment. It serves as a unified and versatile representation of lanes, paving the way for multiple downstream applications. With the introduction of the SD map, the autonomous driving system can exploit these informative priors to achieve satisfactory performance in perception and reasoning.
The following table compares lane formulations in terms of the functionalities they support.
| Lane Formulation | 3D Space | Laneline Category | Lane Direction | Drivable Area | Lane-level Drivable Area | Lane-lane Topology | Bind to Traffic Element | Laneline-less |
|---|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| 2D Laneline | | ✅ | | | | | | |
| 3D Laneline | ✅ | ✅ | | | | | | |
| Online (pseudo) HD Map | ✅ | ✅ | | | | | | |
| Lane Centerline | ✅ | | ✅ | | | ✅ | ✅ | ✅ |
| Lane Segment (newly released) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
- 3D Space: whether the perceived entities are represented in the 3D space.
- Laneline Category: categories of the visible laneline, such as solid and dash.
- Lane Direction: the driving direction that vehicles need to follow in a particular lane.
- Drivable Area: the entire area where vehicles are allowed to drive.
- Lane-level Drivable Area: drivable area of a single lane, which restricts vehicles from trespassing neighboring lanes.
- Lane-lane Topology: connectivity of lanes that builds the lane network to provide routing information.
- Bind to Traffic Element: correspondence to traffic elements, which provide regulations according to traffic rules.
- Laneline-less: the ability to provide guidance in areas where no visible laneline is available, such as intersections.
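To make the functionalities above concrete, here is a minimal sketch of what a lane segment record could look like in Python. All field names are hypothetical illustrations of the concepts in the table, not the devkit's actual schema.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# A 3D point in the ego or world frame (x, y, z), in meters.
Point3D = Tuple[float, float, float]

@dataclass
class LaneSegment:
    """Illustrative container for one lane segment (hypothetical schema)."""
    centerline: List[Point3D]       # point order encodes the lane direction
    left_laneline: List[Point3D]    # left bound of the lane-level drivable area
    right_laneline: List[Point3D]   # right bound of the lane-level drivable area
    left_type: str = "none"         # laneline category, e.g. "solid" / "dash" / "none"
    right_type: str = "none"
    successors: List[int] = field(default_factory=list)        # lane-lane topology
    traffic_elements: List[int] = field(default_factory=list)  # bound traffic elements
```

Note how a single record covers the whole table: 3D geometry, laneline categories, direction, a lane-level drivable area bounded by the two lanelines, topology links, and traffic-element bindings; setting the laneline types to "none" models laneline-less regions such as intersections.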
Previous datasets annotate lanes on images in the perspective view. Such a type of 2D annotation is insufficient to fulfill real-world requirements. Following the OpenLane-V1 practice, we annotate lanes in 3D space to reflect the geometric properties in the real 3D world.
Preventing collisions is not enough; traffic must also flow efficiently. Vehicles follow predefined traffic rules to discipline themselves and cooperate with others, ensuring a safe and efficient traffic system. Traffic elements on the road, such as traffic lights and road signs, provide practical and real-time information.
A traffic element is only valid for its corresponding lanes, and following the wrong signal would be catastrophic. Lanes also have predecessors and successors that together build the map. Autonomous vehicles are required to reason about these topology relationships to drive correctly.
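The lane-lane topology described above can be viewed as a directed graph over lane identifiers. The sketch below (hypothetical helper names, plain-dict representation) shows how predecessor/successor links yield routing information:

```python
from collections import defaultdict

def build_lane_graph(pairs):
    """Build a directed lane graph from (predecessor, successor) id pairs."""
    graph = defaultdict(list)
    for pred, succ in pairs:
        graph[pred].append(succ)
    return graph

def reachable(graph, start):
    """Return all lane ids reachable from `start` via successor links."""
    seen, stack = set(), [start]
    while stack:
        lane = stack.pop()
        if lane in seen:
            continue
        seen.add(lane)
        stack.extend(graph.get(lane, []))
    return seen
```

For example, with links `(0, 1)`, `(1, 2)`, and `(0, 3)`, every lane in `{0, 1, 2, 3}` is reachable from lane 0, which is exactly the kind of routing query a planner would issue against the perceived lane network.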
Prior to using the OpenLane-V2 dataset, you must agree to the terms of use of the nuScenes and Argoverse 2 datasets, respectively. OpenLane-V2 is distributed under the CC BY-NC-SA 4.0 license. All code within this repository is under the Apache License 2.0.
Please use the following citation when referencing OpenLane-V2:
@inproceedings{wang2023openlanev2,
title={OpenLane-V2: A Topology Reasoning Benchmark for Unified 3D HD Mapping},
author={Wang, Huijie and Li, Tianyu and Li, Yang and Chen, Li and Sima, Chonghao and Liu, Zhenbo and Wang, Bangjun and Jia, Peijin and Wang, Yuting and Jiang, Shengyin and Wen, Feng and Xu, Hang and Luo, Ping and Yan, Junchi and Zhang, Wei and Li, Hongyang},
booktitle={NeurIPS},
year={2023}
}
@article{li2023toponet,
title={Graph-based Topology Reasoning for Driving Scenes},
author={Li, Tianyu and Chen, Li and Wang, Huijie and Li, Yang and Yang, Jiazhi and Geng, Xiangwei and Jiang, Shengyin and Wang, Yuting and Xu, Hang and Xu, Chunjing and Yan, Junchi and Luo, Ping and Li, Hongyang},
journal={arXiv preprint arXiv:2304.05277},
year={2023}
}
@inproceedings{li2023lanesegnet,
title={LaneSegNet: Map Learning with Lane Segment Perception for Autonomous Driving},
author={Li, Tianyu and Jia, Peijin and Wang, Bangjun and Chen, Li and Jiang, Kun and Yan, Junchi and Li, Hongyang},
booktitle={ICLR},
year={2024}
}