Infomaxformer: Maximum Entropy Transformer for Long Time-Series Forecasting Problem
Peiwang Tang, Xianchao Zhang
4 January 2023 · arXiv:2301.01772 · AI4TS
Papers citing "Infomaxformer: Maximum Entropy Transformer for Long Time-Series Forecasting Problem" (5 of 5 shown):

Uniform Masking: Enabling MAE Pre-training for Pyramid-based Vision Transformers with Locality
Xiang Li, Wenhai Wang, Lingfeng Yang, Jian Yang
20 May 2022

Masked Autoencoders Are Scalable Vision Learners
Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross B. Girshick (ViT, TPM)
11 Nov 2021

Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting
Haoyi Zhou, Shanghang Zhang, J. Peng, Shuai Zhang, Jianxin Li, Hui Xiong, Wan Zhang (AI4TS)
14 Dec 2020

Big Bird: Transformers for Longer Sequences
Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, ..., Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed (VLM)
28 Jul 2020

Efficient Content-Based Sparse Attention with Routing Transformers
Aurko Roy, M. Saffar, Ashish Vaswani, David Grangier (MoE)
12 Mar 2020