ResearchTrend.AI

arXiv:2308.08925 · Cited By
A White-Box False Positive Adversarial Attack Method on Contrastive Loss Based Offline Handwritten Signature Verification Models

17 August 2023
Zhongliang Guo, Weiye Li, Yifei Qian, Ognjen Arandjelovic, Lei Fang
AAML

Papers citing "A White-Box False Positive Adversarial Attack Method on Contrastive Loss Based Offline Handwritten Signature Verification Models"

9 papers shown
MMAD-Purify: A Precision-Optimized Framework for Efficient and Scalable Multi-Modal Attacks
Xinxin Liu, Zhongliang Guo, Siyuan Huang, Chun Pong Lau
AAML, DiffM · 17 Oct 2024
A Grey-box Attack against Latent Diffusion Model-based Image Editing by Posterior Collapse
Zhongliang Guo, Lei Fang, Jingyu Lin, Yifei Qian, Shuai Zhao, Zeyu Wang, Cunjian Chen, Ognjen Arandjelović, Chun Pong Lau
DiffM, AAML · 20 Aug 2024
Threats and Defenses in Federated Learning Life Cycle: A Comprehensive Survey and Challenges
Yanli Li, Zhongliang Guo, Nan Yang, Huaming Chen, Dong Yuan, Weiping Ding
FedML · 09 Jul 2024
Artwork Protection Against Neural Style Transfer Using Locally Adaptive Adversarial Color Attack
Zhongliang Guo, Junhao Dong, Yifei Qian, Kaixuan Wang, Weiye Li, Ziheng Guo, Yuheng Wang, Yanli Li, Ognjen Arandjelović, Lei Fang
AAML · 18 Jan 2024
Universal Vulnerabilities in Large Language Models: Backdoor Attacks for In-context Learning
Shuai Zhao, Meihuizi Jia, Anh Tuan Luu, Fengjun Pan, Jinming Wen
AAML · 11 Jan 2024
Semi-Supervised Crowd Counting with Contextual Modeling: Facilitating Holistic Understanding of Crowd Scenes
Yifei Qian, Xiaopeng Hong, Zhongliang Guo, Ognjen Arandjelović, Carl R. Donovan
16 Oct 2023
That Doesn't Go There: Attacks on Shared State in Multi-User Augmented Reality Applications
Carter Slocum, Yicheng Zhang, Erfan Shayegani, Pedram Zaree, Nael B. Abu-Ghazaleh, Jiasi Chen
17 Aug 2023
Prompt as Triggers for Backdoor Attack: Examining the Vulnerability in Language Models
Shuai Zhao, Jinming Wen, Anh Tuan Luu, J. Zhao, Jie Fu
SILM · 02 May 2023
Adversarial examples in the physical world
Alexey Kurakin, Ian Goodfellow, Samy Bengio
SILM, AAML · 08 Jul 2016