Poisoned ChatGPT Finds Work for Idle Hands: Exploring Developers' Coding Practices with Insecure Suggestions from Poisoned AI Models

11 December 2023
Sanghak Oh, Kiho Lee, Seonhye Park, Doowon Kim, Hyoungshick Kim
SILM

Papers citing "Poisoned ChatGPT Finds Work for Idle Hands: Exploring Developers' Coding Practices with Insecure Suggestions from Poisoned AI Models"

8 / 8 papers shown
Show Me Your Code! Kill Code Poisoning: A Lightweight Method Based on Code Naturalness
Weisong Sun, Yuchen Chen, Mengzhe Yuan, Chunrong Fang, Zhenpeng Chen, Chong Wang, Yang Liu, Baowen Xu, Zhenyu Chen
AAML · 20 Feb 2025

SoK: Exploring Hallucinations and Security Risks in AI-Assisted Software Development with Insights for LLM Deployment
Ariful Haque, Sunzida Siddique, M. Rahman, Ahmed Rafi Hasan, Laxmi Rani Das, Marufa Kamal, Tasnim Masura, Kishor Datta Gupta
31 Jan 2025

Generalized Adversarial Code-Suggestions: Exploiting Contexts of LLM-based Code-Completion
Karl Rubel, Maximilian Noppel, Christian Wressnegger
AAML, SILM · 14 Oct 2024

Understanding the Human-LLM Dynamic: A Literature Survey of LLM Use in Programming Tasks
Deborah Etsenake, Meiyappan Nagappan
01 Oct 2024

FDI: Attack Neural Code Generation Systems through User Feedback Channel
Zhensu Sun, Xiaoning Du, Xiapu Luo, Fu Song, David Lo, Li Li
AAML · 08 Aug 2024

DeCE: Deceptive Cross-Entropy Loss Designed for Defending Backdoor Attacks
Guang Yang, Yu Zhou, Xiang Chen, Xiangyu Zhang, Terry Yue Zhuo, David Lo, Taolue Chen
AAML · 12 Jul 2024

Beyond Functional Correctness: Investigating Coding Style Inconsistencies in Large Language Models
Yanlin Wang, Tianyue Jiang, Mingwei Liu, Jiachi Chen, Zibin Zheng
29 Jun 2024

TrojanRAG: Retrieval-Augmented Generation Can Be Backdoor Driver in Large Language Models
Pengzhou Cheng, Yidong Ding, Tianjie Ju, Zongru Wu, Wei Du, Ping Yi, Zhuosheng Zhang, Gongshen Liu
SILM, AAML · 22 May 2024