arXiv:2305.04241
Vcc: Scaling Transformers to 128K Tokens or More by Prioritizing Important Tokens
7 May 2023
Zhanpeng Zeng, Cole Hawkins, Min-Fong Hong, Aston Zhang, Nikolaos Pappas, Vikas Singh, Shuai Zheng
Papers citing "Vcc: Scaling Transformers to 128K Tokens or More by Prioritizing Important Tokens" (7 papers shown):
KV-Distill: Nearly Lossless Learnable Context Compression for LLMs
Vivek Chari, Guanghui Qin, Benjamin Van Durme · VLM · 13 Mar 2025
Evidence-Enhanced Triplet Generation Framework for Hallucination Alleviation in Generative Question Answering
Haowei Du, Huishuai Zhang, Dongyan Zhao · HILM · 27 Aug 2024
LLaVolta: Efficient Multi-modal Models via Stage-wise Visual Context Compression
Jieneng Chen, Luoxin Ye, Ju He, Zhao-Yang Wang, Daniel Khashabi, Alan Yuille · VLM · 28 Jun 2024
Dodo: Dynamic Contextual Compression for Decoder-only LMs
Guanghui Qin, Corby Rosset, Ethan C. Chau, Nikhil Rao, Benjamin Van Durme · 03 Oct 2023
ContractNLI: A Dataset for Document-level Natural Language Inference for Contracts
Yuta Koreeda, Christopher D. Manning · AILaw · 05 Oct 2021
The Pile: An 800GB Dataset of Diverse Text for Language Modeling
Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, ..., Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, Connor Leahy · AIMat · 31 Dec 2020
Big Bird: Transformers for Longer Sequences
Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, ..., Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed · VLM · 28 Jul 2020