Dedicated to studies investigating the causes, implications, and mitigation of hallucination, the phenomenon where language models generate plausible but incorrect or nonsensical outputs.
| Title | Authors / Venue |
|---|---|
| When Bias Pretends to Be Truth: How Spurious Correlations Undermine Hallucination Detection in LLMs | Shaowen Wang, Yiqi Dong, Ruinian Chang, Tansheng Zhu, Yuebo Sun, Kaifeng Lyu, Jian Li |
| When Evidence Contradicts: Toward Safer Retrieval-Augmented Generation in Healthcare | Saeedeh Javadi, Sara Mirabi, Manan Gangar, Bahadorreza Ofoghi |
| Stress Testing Factual Consistency Metrics for Long-Document Summarization | Zain Muhammad Mujahid, Dustin Wright, Isabelle Augenstein |
| NOAH: Benchmarking Narrative Prior driven Hallucination and Omission in Video Large Language Models | Kyuho Lee, Euntae Kim, Jinwoo Choi, Buru Chang |
| Injecting Falsehoods: Adversarial Man-in-the-Middle Attacks Undermining Factual Recall in LLMs | Alina Fastowski, Bardh Prenkaj, Yuxiao Li, Gjergji Kasneci |
| Stemming Hallucination in Language Models Using a Licensing Oracle | Simeon Emanuilov, Richard Ackermann |
| CPR: Mitigating Large Language Model Hallucinations with Curative Prompt Refinement | IEEE International Conference on Systems, Man and Cybernetics (SMC), 2024 |