ResearchTrend.AI
Vulnerabilities in AI Code Generators: Exploring Targeted Data Poisoning Attacks

4 August 2023
Domenico Cotroneo, Cristina Improta, Pietro Liguori, R. Natella
Community: SILM

Papers citing "Vulnerabilities in AI Code Generators: Exploring Targeted Data Poisoning Attacks"

3 / 3 papers shown
Title | Authors | Community | Date
SandboxEval: Towards Securing Test Environment for Untrusted Code | Rafiqul Rabin, Jesse Hostetler, Sean McGregor, Brett Weir, Nick Judd | ELM | 27 Mar 2025
Poisoned Source Code Detection in Code Models | Ehab Ghannoum, Mohammad Ghafari | AAML | 19 Feb 2025
Poison Forensics: Traceback of Data Poisoning Attacks in Neural Networks | Shawn Shan, A. Bhagoji, Haitao Zheng, Ben Y. Zhao | AAML | 13 Oct 2021