
LongGenBench: Long-context Generation Benchmark
v3 (latest)

Conference on Empirical Methods in Natural Language Processing (EMNLP), 2024
5 October 2024
Xiang Liu
Peijie Dong
Xuming Hu
Xiaowen Chu
RALM
Links: arXiv (abs) · PDF · HTML · HuggingFace (22 upvotes)

Papers citing "LongGenBench: Long-context Generation Benchmark"

10 / 10 papers shown
DiffAdapt: Difficulty-Adaptive Reasoning for Token-Efficient LLM Inference
Xiang Liu, Xuming Hu, Xiaowen Chu, Eunsol Choi
LRM · 145 · 0 · 0 · 22 Oct 2025
LongReasonArena: A Long Reasoning Benchmark for Large Language Models
Jiayu Ding, Shuming Ma, Lei Cui, Nanning Zheng, Furu Wei
LRM · ELM · 108 · 0 · 0 · 26 Aug 2025
StoryWriter: A Multi-Agent Framework for Long Story Generation
Haotian Xia, Hao Peng, Yunjia Qi, Xiaozhi Wang, Bin Xu, Lei Hou, Juanzi Li
VGen · 321 · 3 · 0 · 19 Jun 2025
Can Compressed LLMs Truly Act? An Empirical Evaluation of Agentic Capabilities in LLM Compression
Peijie Dong, Zhenheng Tang, Xiang Liu, Lujun Li, Xiaowen Chu, Bo Li
450 · 7 · 0 · 26 May 2025
FlowKV: Enhancing Multi-Turn Conversational Coherence in LLMs via Isolated Key-Value Cache Management
Xiang Liu, Hong Chen, Xuming Hu, Xiaowen Chu
293 · 1 · 0 · 21 May 2025
Context-Enhanced Contrastive Search for Improved LLM Text Generation
Jaydip Sen, Rohit Pandey, Hetvi Waghela
318 · 3 · 0 · 22 Apr 2025
Extract, Match, and Score: An Evaluation Paradigm for Long Question-context-answer Triplets in Financial Analysis
Bo Hu, Han Yuan, Vlad Pandelea, Wuqiong Luo, Yingzhu Zhao, Zheng Ma
241 · 2 · 0 · 20 Mar 2025
Shifting Long-Context LLMs Research from Input to Output
Yuhao Wu, Yushi Bai, Zhiqing Hu, Shangqing Tu, Ming Shan Hee, Juanzi Li, Roy Ka-wei Lee
335 · 13 · 0 · 06 Mar 2025
Dialogue Without Limits: Constant-Sized KV Caches for Extended Responses in LLMs
Ravi Ghadia, Avinash Kumar, Gaurav Jain, Shiyang Chen, Poulami Das
343 · 8 · 0 · 02 Mar 2025
Can LLMs Maintain Fundamental Abilities under KV Cache Compression?
Xiang Liu, Zhenheng Tang, Hong Chen, Peijie Dong, Zeyu Li, Xiuze Zhou, Bo Li, Xuming Hu, Xiaowen Chu
1.1K · 14 · 0 · 04 Feb 2025