Insights into LLM Long-Context Failures: When Transformers Know but Don't Tell

20 June 2024
Taiming Lu, Muhan Gao, Kuai Yu, Adam Byerly, Daniel Khashabi

Papers citing "Insights into LLM Long-Context Failures: When Transformers Know but Don't Tell"

10 / 10 papers shown
Reconstructing Context: Evaluating Advanced Chunking Strategies for Retrieval-Augmented Generation
Carlo Merola, Jaspinder Singh
RALM · 55 · 0 · 0 · 28 Apr 2025

Savaal: Scalable Concept-Driven Question Generation to Enhance Human Learning
Kimia Noorbakhsh, Joseph Chandler, Pantea Karimi, M. Alizadeh, H. Balakrishnan
LRM · 44 · 1 · 0 · 18 Feb 2025

Membership Inference Attack against Long-Context Large Language Models
Zixiong Wang, Gaoyang Liu, Yang Yang, Chen Wang
76 · 1 · 0 · 18 Nov 2024

A Deep Dive Into Large Language Model Code Generation Mistakes: What and Why?
QiHong Chen, Jiawei Li, Jiecheng Deng, Jiachen Yu, Justin Tian Jin Chen, Iftekhar Ahmed
42 · 0 · 0 · 03 Nov 2024

FACT: Examining the Effectiveness of Iterative Context Rewriting for Multi-fact Retrieval
Jinlin Wang, Suyuchen Wang, Ziwen Xia, Sirui Hong, Yun Zhu, Bang Liu, Chenglin Wu
KELM, ReLM, HILM, RALM, LRM · 21 · 0 · 0 · 28 Oct 2024

MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation
Chenxi Wang, Xiang Chen, N. Zhang, Bozhong Tian, Haoming Xu, Shumin Deng, H. Chen
MLLM, LRM · 26 · 4 · 0 · 15 Oct 2024

Pay Attention to What Matters
Pedro Luiz Silva, Antonio De Domenico, Ali Maatouk, Fadhel Ayed
ALM · 22 · 0 · 0 · 19 Sep 2024

Cognitive Dissonance: Why Do Language Model Outputs Disagree with Internal Representations of Truthfulness?
Kevin Liu, Stephen Casper, Dylan Hadfield-Menell, Jacob Andreas
HILM · 52 · 35 · 0 · 27 Nov 2023

The Geometry of Truth: Emergent Linear Structure in Large Language Model Representations of True/False Datasets
Samuel Marks, Max Tegmark
HILM · 91 · 164 · 0 · 10 Oct 2023

The Internal State of an LLM Knows When It's Lying
A. Azaria, Tom Michael Mitchell
HILM · 210 · 297 · 0 · 26 Apr 2023