MaPPing Your Model: Assessing the Impact of Adversarial Attacks on LLM-based Programming Assistants

12 July 2024
John Heibel, Daniel Lowd
AAML
arXiv:2407.11072

Papers citing "MaPPing Your Model: Assessing the Impact of Adversarial Attacks on LLM-based Programming Assistants"

3 / 3 papers shown
SOK: Exploring Hallucinations and Security Risks in AI-Assisted Software Development with Insights for LLM Deployment
Ariful Haque, Sunzida Siddique, M. Rahman, Ahmed Rafi Hasan, Laxmi Rani Das, Marufa Kamal, Tasnim Masura, Kishor Datta Gupta
31 Jan 2025

The RealHumanEval: Evaluating Large Language Models' Abilities to Support Programmers
Hussein Mozannar, Valerie Chen, Mohammed Alsobay, Subhro Das, Sebastian Zhao, Dennis L. Wei, Manish Nagireddy, P. Sattigeri, Ameet Talwalkar, David Sontag
ELM
03 Apr 2024

Poisoning Language Models During Instruction Tuning
Alexander Wan, Eric Wallace, Sheng Shen, Dan Klein
SILM
01 May 2023