Truth-Aware Context Selection: Mitigating Hallucinations of Large Language Models Being Misled by Untruthful Contexts
arXiv: 2403.07556 · 12 March 2024
Authors: Tian Yu, Shaolei Zhang, Yang Feng
Topic: HILM
Papers citing "Truth-Aware Context Selection: Mitigating Hallucinations of Large Language Models Being Misled by Untruthful Contexts" (4 papers):

- Attention Hijackers: Detect and Disentangle Attention Hijacking in LVLMs for Hallucination Mitigation. Beitao Chen, Xinyu Lyu, Lianli Gao, Jingkuan Song, H. Shen. 11 Mar 2025.
- SiLLM: Large Language Models for Simultaneous Machine Translation. Shoutao Guo, Shaolei Zhang, Zhengrui Ma, Min Zhang, Yang Feng. LRM. 20 Feb 2024.
- Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts. Jian Xie, Kai Zhang, Jiangjie Chen, Renze Lou, Yu-Chuan Su. RALM. 22 May 2023.
- Did Aristotle Use a Laptop? A Question Answering Benchmark with Implicit Reasoning Strategies. Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot, Dan Roth, Jonathan Berant. RALM. 06 Jan 2021.