Poison Attack and Defense on Deep Source Code Processing Models
arXiv:2210.17029 · 31 October 2022
Jia Li, Zhuo Li, Huangzhao Zhang, Ge Li, Zhi Jin, Xing Hu, Xin Xia
AAML

Papers citing "Poison Attack and Defense on Deep Source Code Processing Models"

11 papers
Signature in Code Backdoor Detection, how far are we?
Quoc Hung Le, Thanh Le-Cong, Bach Le, Bowen Xu
AAML · 15 Oct 2025
MOCHA: Are Code Language Models Robust Against Multi-Turn Malicious Coding Prompts?
Muntasir Wahed, Xiaona Zhou, Kiet A. Nguyen, Tianjiao Yu, Nirav Diwan, Gang Wang, Dilek Hakkani-Tür, Ismini Lourentzou
AAML · 25 Jul 2025
PDLRecover: Privacy-preserving Decentralized Model Recovery with Machine Unlearning
Xiangman Li, Xiaodong Wu, Jianbing Ni, Mohamed Mahmoud, Maazen Alsabaan
AAML · 18 Jun 2025
A Systematic Review of Poisoning Attacks Against Large Language Models
Neil Fendley, Edward W. Staley, Joshua Carney, William Redman, Marie Chau, Nathan G. Drenkow
AAML, PILM · 06 Jun 2025
FDI: Attack Neural Code Generation Systems through User Feedback Channel
International Symposium on Software Testing and Analysis (ISSTA), 2024
Zhensu Sun, Xiaoning Du, Xiapu Luo, Fu Song, David Lo, Li Li
AAML · 08 Aug 2024
Poisoning Programs by Un-Repairing Code: Security Concerns of AI-generated Code
Cristina Improta
SILM, AAML · 11 Mar 2024
PPM: Automated Generation of Diverse Programming Problems for Benchmarking Code Generation Models
Simin Chen, Xiaoning Feng, Xiao Han, Cong Liu, Wei Yang
28 Jan 2024
Gotcha! This Model Uses My Code! Evaluating Membership Leakage Risks in Code Models
IEEE Transactions on Software Engineering (TSE), 2023
Zhou Yang, Zhipeng Zhao, Chenyu Wang, Jieke Shi, Dongsum Kim, Donggyun Han, David Lo
SILM, AAML, MIACV · 02 Oct 2023
Vulnerabilities in AI Code Generators: Exploring Targeted Data Poisoning Attacks
IEEE International Conference on Program Comprehension (ICPC), 2023
Domenico Cotroneo, Cristina Improta, Pietro Liguori, R. Natella
SILM · 04 Aug 2023
Multi-target Backdoor Attacks for Code Pre-trained Models
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
Yanzhou Li, Shangqing Liu, Kangjie Chen, Xiaofei Xie, Tianwei Zhang, Yang Liu
AAML, SILM · 14 Jun 2023
CodeEditor: Learning to Edit Source Code with Pre-trained Models
ACM Transactions on Software Engineering and Methodology (TOSEM), 2022
Jia Li, Ge Li, Zhuo Li, Zhi Jin, Xing Hu, Kechi Zhang, Zhiyi Fu
KELM · 31 Oct 2022