This repository contains a C++ implementation for running inference with the TwinLiteNet model using OpenCV's DNN module. TwinLiteNet is a state-of-the-art lane detection and drivable-area segmentation model. This implementation supports both CUDA and CPU inference, selectable through build options.
I would like to express sincere gratitude to the creators of the TwinLiteNet model for their remarkable work. Their open-source contribution has had a profound impact on the community and has paved the way for numerous applications in autonomous driving, robotics, and beyond. Thank you for your exceptional work.
```
.
├── CMakeLists.txt
├── LICENSE
├── README.md
├── assets
├── include
│   └── twinlitenet_dnn.hpp
├── models
│   └── best.onnx
└── src
    ├── main.cpp
    └── twinlitenet_dnn.cpp
```
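The inference flow implemented in `src/` follows the usual OpenCV DNN pattern. The sketch below is illustrative only: the actual class in `twinlitenet_dnn.hpp` may differ, and the input size, scaling, and output-head order shown here are assumptions, not values taken from this repository.

```cpp
#include <opencv2/dnn.hpp>
#include <opencv2/imgcodecs.hpp>
#include <vector>

int main() {
    // Load the ONNX model from the path shown in the repository layout above.
    cv::dnn::Net net = cv::dnn::readNetFromONNX("models/best.onnx");

    cv::Mat frame = cv::imread("assets/example.jpg");  // hypothetical input image
    // NOTE: the 640x360 input size and 1/255 scaling are assumptions.
    cv::Mat blob = cv::dnn::blobFromImage(frame, 1.0 / 255.0, cv::Size(640, 360),
                                          cv::Scalar(), /*swapRB=*/true, /*crop=*/false);
    net.setInput(blob);

    std::vector<cv::Mat> outs;
    net.forward(outs, net.getUnconnectedOutLayersNames());
    // outs would hold the drivable-area and lane score maps
    // (the head order is an assumption).
    return 0;
}
```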
- OpenCV 4.8+
- CUDA Inference: To enable CUDA support for GPU acceleration, build with the `-DENABLE_CUDA=ON` CMake option.
- CPU Inference: For CPU-based inference, no additional options are required.
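One common way to wire such an option is a compile-time definition that the C++ code can check when choosing a DNN backend. This is a hypothetical sketch; the actual `CMakeLists.txt` in this repository may be organized differently.

```cmake
# Hypothetical sketch of the ENABLE_CUDA option, not the repository's actual file.
option(ENABLE_CUDA "Use the CUDA backend of OpenCV DNN" OFF)

if(ENABLE_CUDA)
  # The define can then be tested in twinlitenet_dnn.cpp to select
  # DNN_BACKEND_CUDA / DNN_TARGET_CUDA instead of the CPU defaults.
  target_compile_definitions(main PRIVATE ENABLE_CUDA)
endif()
```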
- Clone this repository.
- Build the project using CMake with your preferred build options.
```bash
mkdir build
cd build
cmake -DENABLE_CUDA=ON ..
make -j8
```
- Execute `./main` and enjoy accurate lane detection and drivable-area results!
This project is licensed under the MIT License. Feel free to use it in both open-source and commercial applications.