ResearchTrend.AI
Black-box Adversarial Attacks on CNN-based SLAM Algorithms

30 May 2025
Maria Rafaela Gkeka
Bowen Sun
Evgenia Smirni
Christos D. Antonopoulos
Spyros Lalis
Nikolaos Bellas
    AAML
Main: 7 pages · 10 figures · Bibliography: 2 pages · 4 tables
Abstract

Continuous advances in deep learning have led to significant progress in feature detection, enhancing accuracy in tasks such as Simultaneous Localization and Mapping (SLAM). Nevertheless, the vulnerability of deep neural networks to adversarial attacks remains a challenge for their reliable deployment in applications such as the navigation of autonomous agents. Although CNN-based SLAM algorithms are a growing area of research, there is a notable absence of a comprehensive presentation and examination of adversarial attacks targeting CNN-based feature detectors as part of a SLAM system. Our work introduces black-box adversarial perturbations applied to the RGB images fed into the GCN-SLAM algorithm. Our findings on the TUM dataset [30] reveal that even attacks of moderate scale can lead to tracking failure in as many as 76% of the frames. Moreover, our experiments highlight the catastrophic impact of attacking depth rather than RGB input images on the SLAM system.
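The abstract does not specify the form of the black-box perturbation used in the paper. As a minimal illustrative sketch only, assuming a simple additive bounded-noise attack (a common black-box baseline that needs no access to the CNN's gradients), a perturbation applied to an RGB frame before it enters the SLAM pipeline might look like this; the function name and `scale` parameter are hypothetical, not from the paper:

```python
import numpy as np

def black_box_perturbation(rgb_frame, scale=8.0, rng=None):
    """Additive uniform-noise perturbation of an RGB frame.

    Black-box in the sense that it uses only the input image, not the
    feature detector's gradients. `scale` bounds the per-pixel change
    in 8-bit intensity units (an assumed knob, not the paper's).
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    noise = rng.uniform(-scale, scale, size=rgb_frame.shape)
    perturbed = np.clip(rgb_frame.astype(np.float64) + noise, 0, 255)
    return perturbed.astype(np.uint8)

# Perturb a dummy 480x640 frame (the TUM RGB-D resolution) before tracking.
frame = np.full((480, 640, 3), 128, dtype=np.uint8)
adv = black_box_perturbation(frame, scale=8.0)
```

A real evaluation would feed `adv` into GCN-SLAM's front end in place of the clean frame and measure tracking failures over a sequence.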

@article{gkeka2025_2505.24654,
  title={Black-box Adversarial Attacks on CNN-based SLAM Algorithms},
  author={Maria Rafaela Gkeka and Bowen Sun and Evgenia Smirni and Christos D. Antonopoulos and Spyros Lalis and Nikolaos Bellas},
  journal={arXiv preprint arXiv:2505.24654},
  year={2025}
}