Backdoor Learning on Sequence to Sequence Models (arXiv:2305.02424)
3 May 2023
Lichang Chen, Minhao Cheng, Heng-Chiao Huang
SILM

Papers citing "Backdoor Learning on Sequence to Sequence Models"

14 / 14 papers shown
 1. CROW: Eliminating Backdoors from Large Language Models via Internal Consistency Regularization
    Nay Myat Min, Long H. Pham, Yige Li, Jun Sun
    AAML · 18 Nov 2024

 2. BadFair: Backdoored Fairness Attacks with Group-conditioned Triggers
    Jiaqi Xue, Qian Lou, Mengxin Zheng
    23 Oct 2024

 3. PoisonBench: Assessing Large Language Model Vulnerability to Data Poisoning
    Tingchen Fu, Mrinank Sharma, Philip H. S. Torr, Shay B. Cohen, David M. Krueger, Fazl Barez
    AAML · 11 Oct 2024

 4. A Trembling House of Cards? Mapping Adversarial Attacks against Language Agents
    Lingbo Mo, Zeyi Liao, Boyuan Zheng, Yu-Chuan Su, Chaowei Xiao, Huan Sun
    AAML · LLMAG · 15 Feb 2024

 5. TrojFSP: Trojan Insertion in Few-shot Prompt Tuning
    Meng Zheng, Jiaqi Xue, Xun Chen, YanShan Wang, Qian Lou, Lei Jiang
    AAML · 16 Dec 2023

 6. Demystifying Poisoning Backdoor Attacks from a Statistical Perspective
    Ganghua Wang, Xun Xian, Jayanth Srinivasa, Ashish Kundu, Xuan Bi, Mingyi Hong, Jie Ding
    16 Oct 2023

 7. Privacy in Large Language Models: Attacks, Defenses and Future Directions
    Haoran Li, Yulin Chen, Jinglong Luo, Yan Kang, Xiaojin Zhang, Qi Hu, Chunkit Chan, Yangqiu Song
    PILM · 16 Oct 2023

 8. Backdoor Attacks and Countermeasures in Natural Language Processing Models: A Comprehensive Security Review
    Pengzhou Cheng, Zongru Wu, Wei Du, Haodong Zhao, Wei Lu, Gongshen Liu
    SILM · AAML · 12 Sep 2023

 9. A Comprehensive Overview of Backdoor Attacks in Large Language Models within Communication Networks
    Haomiao Yang, Kunlan Xiang, Mengyu Ge, Hongwei Li, Rongxing Lu, Shui Yu
    SILM · 28 Aug 2023

10. Backdooring Instruction-Tuned Large Language Models with Virtual Prompt Injection
    Jun Yan, Vikas Yadav, Shiyang Li, Lichang Chen, Zheng Tang, Hai Wang, Vijay Srinivasan, Xiang Ren, Hongxia Jin
    SILM · 31 Jul 2023

11. AlpaGasus: Training A Better Alpaca with Fewer Data
    Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, ..., Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng-Chiao Huang, Hongxia Jin
    ALM · 17 Jul 2023

12. InstructZero: Efficient Instruction Optimization for Black-Box Large Language Models
    Lichang Chen, Jiuhai Chen, Tom Goldstein, Heng-Chiao Huang, Tianyi Zhou
    05 Jun 2023

13. BITE: Textual Backdoor Attacks with Iterative Trigger Injection
    Jun Yan, Vansh Gupta, Xiang Ren
    SILM · 25 May 2022

14. Teaching Machines to Read and Comprehend
    Karl Moritz Hermann, Tomás Kociský, Edward Grefenstette, L. Espeholt, W. Kay, Mustafa Suleyman, Phil Blunsom
    10 Jun 2015