Safety Interventions against Adversarial Patches in an Open-Source Driver Assistance System

26 April 2025
Cheng Chen
Grant Xiao
Daehyun Lee
Lishan Yang
Evgenia Smirni
Homa Alemzadeh
Xugui Zhou
    AAML
Abstract

Drivers are becoming increasingly reliant on advanced driver assistance systems (ADAS) as autonomous driving technology matures and ships with advanced safety features to enhance road safety. However, the increasing complexity of ADAS makes autonomous vehicles (AVs) more exposed to attacks and accidental faults. In this paper, we evaluate the resilience of a widely used ADAS against safety-critical attacks that target its perception inputs. We simulate various safety mechanisms to assess their impact on mitigating attacks and enhancing ADAS resilience. Experimental results highlight the importance of timely intervention by human drivers and automated safety mechanisms in preventing accidents in both the driving (longitudinal) and lateral directions, and the need to resolve conflicts among safety interventions to enhance system resilience and reliability.
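
The abstract describes two technical ingredients: adversarial patches applied to the perception (camera) input, and safety interventions that may conflict with one another. The Python sketch below illustrates both ideas only in outline; the function names, patch placement, and arbitration rule are hypothetical and are not taken from the paper's implementation or from any specific ADAS codebase.

# Minimal sketch (hypothetical, not the paper's code): overlay an adversarial
# patch on a camera frame, then arbitrate between conflicting safety interventions
# so that longitudinal (braking) and lateral (steering) actions do not fight.
import numpy as np


def apply_patch(frame: np.ndarray, patch: np.ndarray, top_left: tuple) -> np.ndarray:
    """Paste an adversarial patch onto a copy of the camera frame."""
    out = frame.copy()
    y, x = top_left
    h, w = patch.shape[:2]
    out[y:y + h, x:x + w] = patch
    return out


def arbitrate(driver_brake: bool, aeb_request: bool, lane_keep_torque: float) -> dict:
    """Toy arbitration rule: any braking request dominates, and lateral
    (lane-keeping) torque is suppressed while braking is active."""
    braking = driver_brake or aeb_request
    return {
        "brake": braking,
        "steer_torque": 0.0 if braking else lane_keep_torque,
    }


if __name__ == "__main__":
    frame = np.zeros((480, 640, 3), dtype=np.uint8)                   # stand-in camera image
    patch = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)    # stand-in adversarial patch
    attacked = apply_patch(frame, patch, (200, 300))
    print(arbitrate(driver_brake=False, aeb_request=True, lane_keep_torque=0.3))

In the paper's setting, the interesting cases are exactly those where such a rule is too simple, e.g., when a patch-induced perception error triggers one intervention while the driver attempts another; resolving those conflicts is the point the abstract highlights.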

View on arXiv
@article{chen2025_2504.18990,
  title={Safety Interventions against Adversarial Patches in an Open-Source Driver Assistance System},
  author={Cheng Chen and Grant Xiao and Daehyun Lee and Lishan Yang and Evgenia Smirni and Homa Alemzadeh and Xugui Zhou},
  journal={arXiv preprint arXiv:2504.18990},
  year={2025}
}