MOCHA: Are Code Language Models Robust Against Multi-Turn Malicious Coding Prompts?

25 July 2025
Muntasir Wahed
Xiaona Zhou
Kiet A. Nguyen
Tianjiao Yu
Nirav Diwan
Gang Wang
Dilek Hakkani-Tür
Ismini Lourentzou
AAML
arXiv:2507.19598 (abs · PDF · HTML · GitHub)

Papers citing "MOCHA: Are Code Language Models Robust Against Multi-Turn Malicious Coding Prompts?"

1 paper shown
When "Correct" Is Not Safe: Can We Trust Functionally Correct Patches Generated by Code Agents?
Yibo Peng
James Song
Lei Li
Xinyu Yang
Mihai Christodorescu
Ravi Mangal
C. Păsăreanu
Haizhong Zheng
Beidi Chen
15 Oct 2025