Benchmarking Prompt Engineering Techniques for Secure Code Generation with GPT Models
9 February 2025 · arXiv 2502.06039
Marc Bruni, Fabio Gabrielli, Mohammad Ghafari, Martin Kropp
SILM

Papers citing "Benchmarking Prompt Engineering Techniques for Secure Code Generation with GPT Models"

4 papers

When "Correct" Is Not Safe: Can We Trust Functionally Correct Patches Generated by Code Agents?
Yibo Peng, James Song, Lei Li, Xinyu Yang, Mihai Christodorescu, Ravi Mangal, C. Păsăreanu, Haizhong Zheng, Beidi Chen
15 Oct 2025

SecReEvalBench: A Multi-turned Security Resilience Evaluation Benchmark for Large Language Models
Huining Cui, Wei Liu
AAML, ELM
12 May 2025

Security Bug Report Prediction Within and Across Projects: A Comparative Study of BERT and Random Forest
International Conference on Predictive Models in Software Engineering (PROMISE), 2025
Farnaz Soltaniani, Mohammad Ghafari, Mohammed Sayagh
28 Apr 2025

Poisoned Source Code Detection in Code Models
Journal of Systems and Software (JSS), 2025
Ehab Ghannoum, Mohammad Ghafari
AAML
19 Feb 2025