TrafficSafetyGPT

Large Language Models (LLMs) have shown remarkable effectiveness on a wide range of general-domain natural language processing (NLP) tasks. However, their performance on transportation safety tasks has been suboptimal, primarily because generating accurate responses requires specialized transportation safety expertise. To address this challenge, we introduce TrafficSafetyGPT, a novel LLaMA-based model that has undergone supervised fine-tuning on the TrafficSafety-2K dataset, which combines human-labeled content from government-produced guidebooks with ChatGPT-generated instruction-output pairs.

Ou Zheng, Mohamed Abdel-Aty, Dongdong Wang, Chenzhu Wang, Shengxuan Ding

Requires Python 3.9+.

Resources List

UCF traffic safety data drawn from the NHTSA Model Minimum Uniform Crash Criteria (MMUCC) Guideline, Fourth Edition, and the FHWA Highway Safety Manual (HSM), compiled into TrafficSafety-2K.

Stanford Alpaca data for basic conversational capabilities. Alpaca link.

How to fine-tune

We fine-tune our models using standard Hugging Face training code. LLaMA-7B is fine-tuned with the following hyperparameters:

| Hyperparameter | LLaMA-7B |
| --- | --- |
| Batch size | 128 |
| Learning rate | 2e-5 |
| Epochs | 3 |
| Max length | 512 |
| Weight decay | 0 |

Training is launched with torchrun.
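
A typical Alpaca-style launch command is sketched below; the training script name, file paths, and GPU count are assumptions rather than settings confirmed by this repository, and the per-device batch size and gradient accumulation steps are chosen so that the effective batch size matches the 128 listed above.

```bash
# Illustrative multi-GPU launch: 4 GPUs x 4 per-device batch x 8 accumulation = effective batch size 128.
torchrun --nproc_per_node=4 --master_port=29500 train.py \
    --model_name_or_path ./llama-7b-hf \
    --data_path ./TrafficSafety-2K.json \
    --output_dir ./trafficsafetygpt-7b \
    --num_train_epochs 3 \
    --per_device_train_batch_size 4 \
    --gradient_accumulation_steps 8 \
    --learning_rate 2e-5 \
    --weight_decay 0 \
    --warmup_ratio 0.03 \
    --model_max_length 512 \
    --bf16 True
```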

Fine-tuning with LoRA
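
As a lower-cost alternative to full fine-tuning, the model can be adapted with LoRA. The sketch below uses the Hugging Face PEFT library; the base checkpoint path, adapter rank, and target modules are illustrative assumptions, not settings confirmed by this repository.

```python
# Minimal LoRA sketch with Hugging Face PEFT (paths and hyperparameters are assumptions).
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_path = "./llama-7b-hf"  # assumed local path to converted LLaMA-7B weights
model = AutoModelForCausalLM.from_pretrained(base_model_path)
tokenizer = AutoTokenizer.from_pretrained(base_model_path)

lora_config = LoraConfig(
    r=8,                                   # low-rank adapter dimension
    lora_alpha=16,                         # scaling factor applied to the adapter output
    target_modules=["q_proj", "v_proj"],   # attention projections in LLaMA blocks
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

# Wrap the base model so that only the small LoRA adapters are trainable.
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# The wrapped model can then be trained with the same Hugging Face training code used above.
```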

How to run inference

You can build a TrafficSafetyGPT model on your own machine and communicate with it.
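
A minimal inference sketch is shown below, assuming the fine-tuned weights are stored locally and that an Alpaca-style instruction prompt matches the training format; the checkpoint path and example question are illustrative.

```python
# Minimal inference sketch (checkpoint path and prompt template are assumptions).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./trafficsafetygpt-7b"  # assumed path to the fine-tuned weights
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float16, device_map="auto"
)

# Alpaca-style instruction prompt (assumed to match the training format).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWhat data elements does MMUCC recommend collecting for a crash report?\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Strip the prompt tokens and print only the generated answer.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```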

Acknowledgments

This project is built upon several great open-source projects, and we would like to thank their authors and developers.

Reference

TrafficSafetyGPT: Tuning a Pre-trained Large Language Model to a Domain-Specific Expert in Transportation Safety

@misc{zheng2023trafficsafetygpt,
      title={TrafficSafetyGPT: Tuning a Pre-trained Large Language Model to a Domain-Specific Expert in Transportation Safety}, 
      author={Ou Zheng and Mohamed Abdel-Aty and Dongdong Wang and Chenzhu Wang and Shengxuan Ding},
      year={2023},
      eprint={2307.15311},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
