Beyond 512 Tokens: Siamese Multi-depth Transformer-based Hierarchical Encoder for Long-Form Document Matching
arXiv:2004.12297 · 26 April 2020
Liu Yang, Mingyang Zhang, Cheng Li, Michael Bendersky, Marc Najork

Papers citing "Beyond 512 Tokens: Siamese Multi-depth Transformer-based Hierarchical Encoder for Long-Form Document Matching" (7 papers)

Attention over pre-trained Sentence Embeddings for Long Document Classification
Amine Abdaoui, Sourav Dutta · 18 Jul 2023

EUR-Lex-Sum: A Multi- and Cross-lingual Dataset for Long-form Summarization in the Legal Domain
Dennis Aumiller, Ashish Chouhan, Michael Gertz · 24 Oct 2022 · Tags: ELM, AILaw

An Exploration of Hierarchical Attention Transformers for Efficient Long Document Classification
Ilias Chalkidis, Xiang Dai, Manos Fergadiotis, Prodromos Malakasiotis, Desmond Elliott · 11 Oct 2022

Machine Learning for Violence Risk Assessment Using Dutch Clinical Notes
P. Mosteiro, Emil Rijcken, Kalliopi Zervanou, U. Kaymak, Floortje E. Scheepers, Marco Spruit · 28 Apr 2022

WebFormer: The Web-page Transformer for Structure Information Extraction
Qifan Wang, Yi Fang, Anirudh Ravula, Fuli Feng, Xiaojun Quan, Dongfang Liu · 01 Feb 2022 · Tags: ViT

Overview of the TREC 2019 deep learning track
Nick Craswell, Bhaskar Mitra, Emine Yilmaz, Daniel Fernando Campos, E. Voorhees · 17 Mar 2020

Efficient Content-Based Sparse Attention with Routing Transformers
Aurko Roy, M. Saffar, Ashish Vaswani, David Grangier · 12 Mar 2020 · Tags: MoE