
Using ML filters to help automated vulnerability repairs: when it helps and when it doesn't

9 April 2025
Maria Camporese
Fabio Massacci
Abstract

[Context:] The acceptance of candidate patches in automated program repair has typically been based on testing oracles. Testing typically requires a costly process of building the application, whereas ML models can classify patches quickly, thus allowing more candidate patches to be generated in a positive feedback loop. [Problem:] If the model's predictions are unreliable (as in vulnerability detection), they can hardly replace the more reliable oracles based on testing. [New Idea:] We propose to use an ML model as a preliminary filter of candidate patches, placed in front of a traditional filter based on testing. [Preliminary Results:] We identify theoretical bounds on the precision and recall of the ML algorithm that make such an operation meaningful in practice. With these bounds and the results published in the literature, we calculate how fast some state-of-the-art vulnerability detectors must be to be more effective than a traditional AVR pipeline, such as APR4Vuln, based on testing alone.
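To make the trade-off concrete, here is a back-of-envelope cost model in Python. It is an illustrative sketch, not the paper's actual derivation: the cost figures, function names, and the pay-off formula are my own assumptions. It estimates the expected cost of validating one correct patch with and without an ML pre-filter of given precision and recall.

# Illustrative cost model (an assumption for this sketch, not the
# paper's exact bounds): a fraction p of generated candidate patches
# is actually correct, one build+test cycle costs c_test, and one
# ML prediction costs c_ml.

def cost_testing_only(p: float, c_test: float) -> float:
    """Expected cost to validate one correct patch with testing alone:
    on average 1/p candidates must be built and tested."""
    return c_test / p

def cost_with_ml_filter(p: float, c_test: float, c_ml: float,
                        precision: float, recall: float) -> float:
    """Expected cost when an ML filter screens candidates first.
    Roughly 1/(p * recall) candidates must be scored for one correct
    patch to pass the filter, and since only a fraction `precision`
    of the forwarded patches is correct, 1/precision of them must be
    built and tested per validated patch."""
    return c_ml / (p * recall) + c_test / precision

# Hypothetical numbers: 5% of candidates are correct, a build+test
# cycle costs 600 s, one ML prediction costs 2 s, and the filter
# has 30% precision and 80% recall.
p, c_test, c_ml = 0.05, 600.0, 2.0
precision, recall = 0.30, 0.80

baseline = cost_testing_only(p, c_test)                             # 12000 s
filtered = cost_with_ml_filter(p, c_test, c_ml, precision, recall)  #  2050 s
print(f"testing only:   {baseline:.0f} s per validated patch")
print(f"with ML filter: {filtered:.0f} s per validated patch")

# Under this model the filter helps only while its own cost stays
# below the testing time it saves, i.e.
#   c_ml < recall * c_test * (1 - p / precision),
# which is the kind of speed requirement the abstract alludes to
# (note precision must exceed p for the filter to help at all).
print("filter pays off:", filtered < baseline)

With these hypothetical numbers the filter wins easily (2050 s vs 12000 s per validated patch), but a slower model or a precision close to the base rate p would flip the comparison, which is exactly the regime the paper's bounds delimit.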

@article{camporese2025_2504.07027,
  title={Using ML filters to help automated vulnerability repairs: when it helps and when it doesn't},
  author={Maria Camporese and Fabio Massacci},
  journal={arXiv preprint arXiv:2504.07027},
  year={2025}
}