Scale-Invariant Adversarial Attack against Arbitrary-scale Super-resolution

6 March 2025
Yihao Huang
Xin Luo
Qing Guo
Felix Juefei-Xu
Xiaojun Jia
Weikai Miao
Geguang Pu
Yang Liu
arXiv · PDF · HTML
Abstract

The advent of the local implicit image function (LIIF) has garnered significant attention for arbitrary-scale super-resolution (SR). However, while the vulnerabilities of fixed-scale SR have been assessed, the robustness of continuous representation-based arbitrary-scale SR against adversarial attacks remains underexplored. Adversarial attacks elaborately designed for fixed-scale SR are scale-dependent, which incurs prohibitive time and memory costs when they are applied to arbitrary-scale SR. To address this concern, we propose a simple yet effective "scale-invariant" SR adversarial attack method with good transferability, termed SIAGT. Specifically, we construct resource-saving attacks by exploiting a finite set of discrete points of the continuous representation. In addition, we formulate a coordinate-dependent loss to enhance the cross-model transferability of the attack. The attack significantly deteriorates the SR outputs while introducing imperceptible distortion to the targeted low-resolution (LR) images. Experiments on three popular LIIF-based SR approaches and four classical SR datasets show the remarkable attack performance and transferability of SIAGT.
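The core idea, attacking a finite set of query coordinates of a continuous representation so that a single LR perturbation degrades the output at any rendering scale, can be illustrated with a toy sketch. Everything below is an illustrative assumption, not the paper's implementation: `toy_sr` is a stand-in for a LIIF-style decoder (here a simple coordinate-weighted linear map), and `siagt_attack` is a generic projected sign-gradient ascent loop, not the actual SIAGT algorithm or its coordinate-dependent loss.

```python
import math

def toy_sr(lr, coord):
    """Toy continuous SR: predicts a value at a continuous coordinate from
    the LR signal (a stand-in for a LIIF-style implicit decoder)."""
    n = len(lr)
    # coordinate-dependent weights; purely illustrative
    return sum(lr[i] * math.cos(coord * (i + 1)) for i in range(n)) / n

def siagt_attack(lr, coords, eps=0.03, alpha=0.01, steps=20):
    """Scale-invariant attack sketch: perturb the LR input (within an
    eps-ball) to maximize output deviation at a FINITE set of query
    coordinates, so the same perturbation hurts any rendering scale."""
    clean = [toy_sr(lr, c) for c in coords]
    adv = list(lr)
    for _ in range(steps):
        # analytic gradient of the summed squared output deviation
        # w.r.t. the perturbed LR pixels (easy for the linear toy model)
        grad = [0.0] * len(adv)
        for c, y0 in zip(coords, clean):
            diff = toy_sr(adv, c) - y0
            for i in range(len(adv)):
                grad[i] += 2.0 * diff * math.cos(c * (i + 1)) / len(adv)
        # signed ascent step, projected back into the eps-ball around lr
        adv = [min(lr[i] + eps, max(lr[i] - eps,
                   adv[i] + alpha * (1.0 if grad[i] > 0 else -1.0)))
               for i in range(len(adv))]
    return adv

lr = [0.2, 0.5, 0.8, 0.3]          # tiny "LR image"
coords = [0.1, 0.7, 1.3]           # finite query coordinates
adv = siagt_attack(lr, coords)
deviation = sum((toy_sr(adv, c) - toy_sr(lr, c)) ** 2 for c in coords)
```

Because the attack only ever queries a fixed, finite coordinate set, its cost is independent of the target upsampling scale, which is the resource-saving intuition the abstract describes.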

View on arXiv
@article{huang2025_2503.04385,
  title={Scale-Invariant Adversarial Attack against Arbitrary-scale Super-resolution},
  author={Yihao Huang and Xin Luo and Qing Guo and Felix Juefei-Xu and Xiaojun Jia and Weikai Miao and Geguang Pu and Yang Liu},
  journal={arXiv preprint arXiv:2503.04385},
  year={2025}
}