arXiv:2407.08039 · Cited By
Knowledge Overshadowing Causes Amalgamated Hallucination in Large Language Models

10 July 2024
Yuji Zhang, Sha Li, Jiateng Liu, Pengfei Yu, Yi R. Fung, Jing Li, Heng Ji

Papers citing "Knowledge Overshadowing Causes Amalgamated Hallucination in Large Language Models"

18 / 18 papers shown
Geometric-disentangelment Unlearning
Duo Zhou, Yuji Zhang, Tianxin Wei, Ruizhong Qiu, Ke Yang, ..., Jingrui He, Hanghang Tong, Heng Ji, Huan Zhang
21 Nov 2025

Data Value in the Age of Scaling: Understanding LLM Scaling Dynamics Under Real-Synthetic Data Mixtures
Haohui Wang, Jingyuan Qi, Jianpeng Chen, Jun Wu, Lifu Huang, ..., Balaji Veeramani, Edward Bowen, Alison Hu, Tyler Cody, Dawei Zhou
17 Nov 2025

LIHE: Linguistic Instance-Split Hyperbolic-Euclidean Framework for Generalized Weakly-Supervised Referring Expression Comprehension (EMNLP, 2025)
X. Shi, Silin Cheng, Sirui Zhao, Yunhan Jiang, Enhong Chen, Yang Liu, Sebastien Ourselin
15 Nov 2025

DePass: Unified Feature Attributing by Simple Decomposed Forward Pass
Xiangyu Hong, Che Jiang, Kai Tian, Biqing Qi, Youbang Sun, Ning Ding, Bowen Zhou
21 Oct 2025

Do LLMs Know They Are Being Tested? Evaluation Awareness and Incentive-Sensitive Failures in GPT-OSS-20B
Nisar Ahmed, Muhammad Imran Zaman, Gulshan Saleem, Ali Hassan
08 Oct 2025

From Superficial Outputs to Superficial Learning: Risks of Large Language Models in Education
Iris Delikoura, Yi R. Fung
26 Sep 2025

NIRVANA: Structured pruning reimagined for large language models compression
Mengting Ai, Tianxin Wei, Sirui Chen, Jingrui He
17 Sep 2025

Exploring Causal Effect of Social Bias on Faithfulness Hallucinations in Large Language Models
Zhenliang Zhang, Junzhe Zhang, Xinyu Hu, Huixuan Zhang, Xiaojun Wan
11 Aug 2025

Towards Mitigation of Hallucination for LLM-empowered Agents: Progressive Generalization Bound Exploration and Watchdog Monitor
Siyuan Liu, Wenjing Liu, Zhiwei Xu, Xin Wang, B. Chen, Tao Li
21 Jul 2025

Why and How LLMs Hallucinate: Connecting the Dots with Subsequence Associations
Yiyou Sun, Y. Gai, Lijie Chen, Abhilasha Ravichander, Yejin Choi, Basel Alomair
17 Apr 2025

Amulet: ReAlignment During Test Time for Personalized Preference Adaptation of LLMs (ICLR, 2025)
Zhaowei Zhang, Fengshuo Bai, Qizhi Chen, Chengdong Ma, Mingzhi Wang, Haoran Sun, Zilong Zheng, Wenbo Ding
26 Feb 2025

Think More, Hallucinate Less: Mitigating Hallucinations via Dual Process of Fast and Slow Thinking (ACL, 2025)
Xiaoxue Cheng, Junyi Li, Wayne Xin Zhao, Ji-Rong Wen
02 Jan 2025

Continual Memorization of Factoids in Language Models
Howard Chen, Jiayi Geng, Adithya Bhaskar, Dan Friedman, Danqi Chen
11 Nov 2024

MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation (ICLR, 2024)
Chenxi Wang, Xiang Chen, Ningyu Zhang, Bozhong Tian, Haoming Xu, Shumin Deng
15 Oct 2024

ChroKnowledge: Unveiling Chronological Knowledge of Language Models in Multiple Domains (ICLR, 2024)
Yein Park, Chanwoong Yoon, Jungwoo Park, Donghyeon Lee, Minbyul Jeong, Jaewoo Kang
13 Oct 2024

Integrative Decoding: Improve Factuality via Implicit Self-consistency
Yi Cheng, Xiao Liang, Yeyun Gong, Wen Xiao, Song Wang, ..., Wenjie Li, Jian Jiao, Qi Chen, Peng Cheng, Wayne Xiong
02 Oct 2024

A Survey on Vision-Language-Action Models for Embodied AI
Yueen Ma, Zixing Song, Yuzheng Zhuang, Jianye Hao, Irwin King
23 May 2024

ADEPT: A DEbiasing PrompT Framework (AAAI, 2022)
Ke Yang, Charles Yu, Yi R. Fung, Pengfei Yu, Heng Ji
10 Nov 2022