

Efficiency Robustness of Dynamic Deep Learning Systems

12 June 2025
Ravishka Rathnasuriya
Tingxi Li
Zexin Xu
Zihe Song
Mirazul Haque
Simin Chen
Wei Yang
Main: 13 pages · 6 figures · 5 tables · Bibliography: 6 pages · Appendix: 1 page
Abstract

Deep Learning Systems (DLSs) are increasingly deployed in real-time applications, including those in resource-constrained environments such as mobile and IoT devices. To address efficiency challenges, Dynamic Deep Learning Systems (DDLSs) adapt inference computation to input complexity, reducing overhead. While this dynamic behavior improves efficiency, it also introduces new attack surfaces: efficiency adversarial attacks exploit these dynamic mechanisms to degrade system performance. This paper systematically explores the efficiency robustness of DDLSs, presenting the first comprehensive taxonomy of efficiency attacks. We categorize these attacks by the dynamic behavior they target: (i) attacks on dynamic computations per inference, (ii) attacks on dynamic inference iterations, and (iii) attacks on dynamic output production for downstream tasks. Through an in-depth evaluation, we analyze adversarial strategies that target DDLS efficiency and identify key challenges in securing these systems. In addition, we examine existing defense mechanisms, demonstrating their limitations against increasingly popular efficiency attacks and the need for novel mitigation strategies to secure future adaptive DDLSs.
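To make the first attack category concrete, here is a minimal, hypothetical sketch (not the paper's implementation) of a toy early-exit classifier, a common form of dynamic computation per inference. A cheap first stage exits when its softmax confidence clears a threshold; otherwise an expensive second stage runs. An efficiency attack crafts inputs whose first-stage confidence stays low, forcing the costly path on every query. All weights and thresholds below are illustrative toy values.

```python
import numpy as np

# Toy weights for a two-stage early-exit classifier (illustrative values).
W1 = np.array([[3.0, 0.0, 0.0],   # cheap stage
               [0.0, 3.0, 0.0],
               [0.0, 0.0, 3.0],
               [0.0, 0.0, 0.0]])
W2 = np.array([[1.0, 0.5, 0.0],   # expensive stage (stands in for deeper layers)
               [0.0, 1.0, 0.5],
               [0.5, 0.0, 1.0],
               [0.2, 0.2, 0.2]])

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def early_exit_predict(x, threshold=0.6):
    """Return (predicted class, stages used). Exit after stage 1 if confident."""
    p1 = softmax(x @ W1)
    if p1.max() >= threshold:
        return int(p1.argmax()), 1        # early exit: one stage of compute
    p2 = softmax(x @ W2)                  # fall through to the expensive stage
    return int(p2.argmax()), 2

# A benign, easily classified input exits early.
confident = np.array([2.0, 0.0, 0.0, 0.0])
_, cost_benign = early_exit_predict(confident)      # uses 1 stage

# An adversarially "flattened" input yields near-uniform stage-1 scores
# (here x @ W1 == 0 exactly), so the model always pays for both stages.
flattened = np.zeros(4)
_, cost_attacked = early_exit_predict(flattened)    # uses 2 stages
```

The attack never changes the predicted label; it only manipulates the exit condition, which is why accuracy-oriented defenses can miss this class of threat entirely.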

@article{rathnasuriya2025_2506.10831,
  title={Efficiency Robustness of Dynamic Deep Learning Systems},
  author={Ravishka Rathnasuriya and Tingxi Li and Zexin Xu and Zihe Song and Mirazul Haque and Simin Chen and Wei Yang},
  journal={arXiv preprint arXiv:2506.10831},
  year={2025}
}