ResearchTrend.AI
Efficient Long-Text Understanding with Short-Text Models
Maor Ivgi, Uri Shaham, Jonathan Berant
arXiv:2208.00748 · 1 August 2022 · Tags: VLM

Papers citing "Efficient Long-Text Understanding with Short-Text Models" (12 papers shown)

MateICL: Mitigating Attention Dispersion in Large-Scale In-Context Learning
Murtadha Ahmed, Wenbo, Liu Yunfeng
02 May 2025

Cognitive Memory in Large Language Models
Lianlei Shan, Shixian Luo, Zezhou Zhu, Yu Yuan, Yong Wu
03 Apr 2025 · Tags: LLMAG, KELM

Lost-in-Distance: Impact of Contextual Proximity on LLM Performance in Graph Tasks
Hamed Firooz, Maziar Sanjabi, Wenlong Jiang, Xiaoling Zhai
03 Jan 2025

LLM The Genius Paradox: A Linguistic and Math Expert's Struggle with Simple Word-based Counting Problems
Nan Xu, Xuezhe Ma
18 Oct 2024 · Tags: LRM

When Can Transformers Count to n?
Gilad Yehudai, Haim Kaplan, Asma Ghandeharioun, Mor Geva, Amir Globerson
21 Jul 2024

In-Context Learning with Long-Context Models: An In-Depth Exploration
Amanda Bertsch, Maor Ivgi, Uri Alon, Jonathan Berant, Matthew R. Gormley, Graham Neubig
30 Apr 2024 · Tags: ReLM, AIMat

Focus Your Attention (with Adaptive IIR Filters)
Shahar Lutati, Itamar Zimerman, Lior Wolf
24 May 2023

CAB: Comprehensive Attention Benchmarking on Long Sequence Modeling
Jinchao Zhang, Shuyang Jiang, Jiangtao Feng, Lin Zheng, Lingpeng Kong
14 Oct 2022 · Tags: 3DV

Modeling Multi-hop Question Answering as Single Sequence Prediction
Semih Yavuz, Kazuma Hashimoto, Yingbo Zhou, N. Keskar, Caiming Xiong
18 May 2022

ContractNLI: A Dataset for Document-level Natural Language Inference for Contracts
Yuta Koreeda, Christopher D. Manning
05 Oct 2021 · Tags: AILaw

Big Bird: Transformers for Longer Sequences
Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, ..., Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed
28 Jul 2020 · Tags: VLM

Efficient Content-Based Sparse Attention with Routing Transformers
Aurko Roy, M. Saffar, Ashish Vaswani, David Grangier
12 Mar 2020 · Tags: MoE