
Adversarial Attacks

Here are some resources about adversarial attacks.

Intros:

  • Adversarial attacks in autonomous driving are attempts to fool or manipulate the perception and decision-making systems of autonomous vehicles (AVs) using carefully crafted inputs known as adversarial examples. These examples are usually formed by adding small perturbations to normal inputs (such as camera images or sensor data) that cause the AV's machine learning models to make incorrect predictions or interpretations (see the FGSM sketch after this list).

  • The potential for adversarial attacks on AVs highlights the need for robustness in the design of autonomous systems. Research in this field focuses on understanding the nature of these attacks and their implications, and on developing methods to mitigate their effects. Techniques such as adversarial training (training the model on adversarial examples, sketched below), defensive distillation, and feature squeezing are used to increase the robustness of machine learning models against adversarial attacks. This remains an active area of research given the high-stakes nature of autonomous driving and the need to ensure safety and reliability under all conditions.
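To make the first point concrete, here is a minimal sketch of crafting an adversarial example with the Fast Gradient Sign Method (FGSM, Goodfellow et al., 2015), one of the simplest perturbation attacks. The model, input tensors, and the perturbation budget `epsilon` are illustrative assumptions, not taken from any paper in this list:

```python
# Hypothetical FGSM sketch: perturb an input so a classifier's loss
# on the true label increases. Model, inputs, and epsilon are assumed.
import torch
import torch.nn as nn

def fgsm_example(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    """Return a copy of x perturbed in the direction that raises the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()  # gradient of the loss w.r.t. the input pixels
    # Step each pixel by epsilon in the sign of its gradient, then
    # clamp back to the valid pixel range [0, 1].
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

The sign of the input gradient gives, per pixel, the direction that most increases the loss; `epsilon` bounds the perturbation so the adversarial image stays visually close to the original.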
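And a correspondingly hedged sketch of adversarial training, the defense mentioned above: each batch is augmented with FGSM-perturbed copies so the model also learns to classify worst-case inputs correctly. Here `loader`, `optimizer`, and the equal loss weighting are assumptions for illustration, and `fgsm_example` refers to the sketch above:

```python
# Hypothetical adversarial-training loop: train on clean and
# FGSM-perturbed batches. loader/optimizer are assumed placeholders.
import torch
import torch.nn as nn

def adversarial_training_epoch(model, loader, optimizer, epsilon=0.03):
    criterion = nn.CrossEntropyLoss()
    model.train()
    for x, y in loader:
        # Craft adversarial copies of the batch (fgsm_example defined above).
        x_adv = fgsm_example(model, x, y, epsilon)
        optimizer.zero_grad()  # clears gradients left over from crafting
        # Sum the losses so the model fits both clean and perturbed inputs.
        loss = criterion(model(x), y) + criterion(model(x_adv), y)
        loss.backward()
        optimizer.step()
```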

Table of Contents


Deep learning-based autonomous driving systems: A survey of attacks and defenses [READ]

paper link: here

citation:

```bibtex
@article{deng2021deep,
  title={Deep learning-based autonomous driving systems: A survey of attacks and defenses},
  author={Deng, Yao and Zhang, Tiehua and Lou, Guannan and Zheng, Xi and Jin, Jiong and Han, Qing-Long},
  journal={IEEE Transactions on Industrial Informatics},
  volume={17},
  number={12},
  pages={7897--7912},
  year={2021},
  publisher={IEEE}
}
```