arXiv:2210.15541
Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost

27 October 2022
Sungjun Cho, Seonwoo Min, Jinwoo Kim, Moontae Lee, Honglak Lee, Seunghoon Hong

Papers citing "Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost"

5 / 5 papers shown
CSA-Trans: Code Structure Aware Transformer for AST
Saeyoon Oh, Shin Yoo
07 Apr 2024
Curve Your Attention: Mixed-Curvature Transformers for Graph Representation Learning
Sungjun Cho, Seunghyuk Cho, Sungwoo Park, Hankook Lee, Ho Hin Lee, Moontae Lee
08 Sep 2023
Neural-prior stochastic block model
O. Duranthon, L. Zdeborová
17 Mar 2023
Big Bird: Transformers for Longer Sequences
Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, ..., Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed
28 Jul 2020
GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Jinpeng Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
20 Apr 2018