LLM Hallucinations in Practical Code Generation: Phenomena, Mechanism, and Mitigation
arXiv: 2409.20550
20 January 2025
Ziyao Zhang, Yanlin Wang, Chong Wang, Jiachi Chen, Zibin Zheng
Papers citing "LLM Hallucinations in Practical Code Generation: Phenomena, Mechanism, and Mitigation" (5 of 5 papers shown):
- Hallucination by Code Generation LLMs: Taxonomy, Benchmarks, Mitigation, and Challenges. Yunseo Lee, John Youngeun Song, Dongsun Kim, Jindae Kim, Mijung Kim, Jaechang Nam. 29 Apr 2025. Tags: HILM, LRM.
- Automated Factual Benchmarking for In-Car Conversational Systems using Large Language Models. Rafael Giebisch, Ken E. Friedl, Lev Sorokin, Andrea Stocco. 01 Apr 2025. Tags: HILM.
- Isolating Language-Coding from Problem-Solving: Benchmarking LLMs with PseudoEval. Jiarong Wu, Songqiang Chen, Jialun Cao, Hau Ching Lo, S. Cheung. 26 Feb 2025.
- SOK: Exploring Hallucinations and Security Risks in AI-Assisted Software Development with Insights for LLM Deployment. Ariful Haque, Sunzida Siddique, M. Rahman, Ahmed Rafi Hasan, Laxmi Rani Das, Marufa Kamal, Tasnim Masura, Kishor Datta Gupta. 31 Jan 2025.
- A Deep Dive Into Large Language Model Code Generation Mistakes: What and Why? QiHong Chen, Jiawei Li, Jiecheng Deng, Jiachen Yu, Justin Tian Jin Chen, Iftekhar Ahmed. 03 Nov 2024.