arXiv: 2110.11773
Sinkformers: Transformers with Doubly Stochastic Attention
22 October 2021
Michael E. Sander, Pierre Ablin, Mathieu Blondel, Gabriel Peyré
Links: ArXiv · PDF · HTML
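As the title indicates, a Sinkformer replaces the row-wise softmax normalization of standard attention with Sinkhorn iterations, so that the attention matrix becomes (approximately) doubly stochastic, with both rows and columns normalized. The snippet below is a minimal NumPy sketch of that idea only; the function name, the `eps` temperature, and the fixed iteration count are illustrative choices, not the authors' implementation.

```python
import numpy as np

def sinkhorn_attention(Q, K, V, n_iter=3, eps=1.0):
    """Illustrative doubly stochastic attention (sketch, not the paper's code)."""
    d = Q.shape[1]
    # Scaled dot-product scores, as in standard attention.
    S = Q @ K.T / np.sqrt(d)
    # Gibbs kernel; subtracting the max only rescales A and keeps exp() stable.
    A = np.exp((S - S.max()) / eps)
    # Sinkhorn's algorithm: alternate row and column normalization.
    # A single row normalization would reduce this to softmax attention.
    for _ in range(n_iter):
        A = A / A.sum(axis=1, keepdims=True)  # rows sum to 1
        A = A / A.sum(axis=0, keepdims=True)  # columns sum to 1
    return A @ V

# Tiny usage example with random inputs.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(6, 4)) for _ in range(3))
out = sinkhorn_attention(Q, K, V)
print(out.shape)  # (6, 4)
```

More Sinkhorn iterations tighten both the row and column constraints; a small fixed number keeps the cost close to ordinary softmax attention.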
Papers citing "Sinkformers: Transformers with Doubly Stochastic Attention" (10 of 60 papers shown)
Unsupervised Manifold Linearizing and Clustering. Tianjiao Ding, Shengbang Tong, Kwan Ho Ryan Chan, Xili Dai, Y. Ma, B. Haeffele. 04 Jan 2023.
Learning Gaussian Mixtures Using the Wasserstein-Fisher-Rao Gradient Flow. Yuling Yan, Kaizheng Wang, Philippe Rigollet. 04 Jan 2023.
Regularized Optimal Transport Layers for Generalized Global Pooling Operations. Hongteng Xu, Minjie Cheng. 13 Dec 2022.
Designing Robust Transformers using Robust Kernel Density Estimation. Xing Han, Tongzheng Ren, T. Nguyen, Khai Nguyen, Joydeep Ghosh, Nhat Ho. 11 Oct 2022.
Sparsity-Constrained Optimal Transport. Tianlin Liu, J. Puigcerver, Mathieu Blondel. 30 Sep 2022. (OT)
Rethinking Initialization of the Sinkhorn Algorithm. James Thornton, Marco Cuturi. 15 Jun 2022. (OT)
Transformer with Fourier Integral Attentions. T. Nguyen, Minh Pham, Tam Nguyen, Khai Nguyen, Stanley J. Osher, Nhat Ho. 01 Jun 2022.
Do Residual Neural Networks discretize Neural Ordinary Differential Equations? Michael E. Sander, Pierre Ablin, Gabriel Peyré. 29 May 2022.
Revisiting Global Pooling through the Lens of Optimal Transport. Minjie Cheng, Hongteng Xu. 23 Jan 2022.
FlowPool: Pooling Graph Representations with Wasserstein Gradient Flows. E. Simou. 18 Dec 2021.