ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Beyond 512 Tokens: Siamese Multi-depth Transformer-based Hierarchical Encoder for Long-Form Document Matching
Liu Yang, Mingyang Zhang, Cheng Li, Michael Bendersky, Marc Najork
26 April 2020

Papers citing "Beyond 512 Tokens: Siamese Multi-depth Transformer-based Hierarchical Encoder for Long-Form Document Matching"

7 / 7 papers shown
Attention over pre-trained Sentence Embeddings for Long Document Classification
Amine Abdaoui, Sourav Dutta
18 Jul 2023

EUR-Lex-Sum: A Multi- and Cross-lingual Dataset for Long-form Summarization in the Legal Domain
Dennis Aumiller, Ashish Chouhan, Michael Gertz
ELM, AILaw
24 Oct 2022

An Exploration of Hierarchical Attention Transformers for Efficient Long Document Classification
Ilias Chalkidis, Xiang Dai, Manos Fergadiotis, Prodromos Malakasiotis, Desmond Elliott
11 Oct 2022

Machine Learning for Violence Risk Assessment Using Dutch Clinical Notes
P. Mosteiro, Emil Rijcken, Kalliopi Zervanou, U. Kaymak, Floortje E. Scheepers, Marco Spruit
28 Apr 2022

WebFormer: The Web-page Transformer for Structure Information Extraction
Qifan Wang, Yi Fang, Anirudh Ravula, Fuli Feng, Xiaojun Quan, Dongfang Liu
ViT
01 Feb 2022

Overview of the TREC 2019 deep learning track
Nick Craswell, Bhaskar Mitra, Emine Yilmaz, Daniel Fernando Campos, E. Voorhees
17 Mar 2020

Efficient Content-Based Sparse Attention with Routing Transformers
Aurko Roy, M. Saffar, Ashish Vaswani, David Grangier
MoE
12 Mar 2020