Distilled Wasserstein Learning for Word Embedding and Topic Modeling
arXiv:1809.04705 · 12 September 2018
Hongteng Xu, Wenlin Wang, W. Liu, Lawrence Carin
Communities: MedIm, FedML

Papers citing "Distilled Wasserstein Learning for Word Embedding and Topic Modeling" (8 papers)

Towards Cross-Tokenizer Distillation: the Universal Logit Distillation Loss for LLMs
Nicolas Boizard, Kevin El Haddad, Céline Hudelot, Pierre Colombo
28 Jan 2025

Linearized Wasserstein Barycenters: Synthesis, Analysis, Representational Capacity, and Applications
Matthew Werenski, Brendan Mallery, Shuchin Aeron, James M. Murphy
31 Oct 2024

Knowledge-Aware Bayesian Deep Topic Model
Dongsheng Wang, Yishi Xu, Miaoge Li, Zhibin Duan, Chaojie Wang, Bo Chen, Mingyuan Zhou
Communities: BDL
20 Sep 2022

Hybrid Gromov-Wasserstein Embedding for Capsule Learning
Pourya Shamsolmoali, Masoumeh Zareapoor, Swagatam Das, Eric Granger, Salvador García
Communities: MedIm
01 Sep 2022

Measure Estimation in the Barycentric Coding Model
Matthew Werenski, Ruijie Jiang, Abiy Tasissa, Shuchin Aeron, James M. Murphy
28 Jan 2022

Gaussian Hierarchical Latent Dirichlet Allocation: Bringing Polysemy Back
Takahiro Yoshida, Ryohei Hisano, T. Ohnishi
25 Feb 2020

Hierarchical Optimal Transport for Document Representation
Mikhail Yurochkin, Sebastian Claici, Edward Chien, F. Mirzazadeh, Justin Solomon
Communities: OT
26 Jun 2019

Wasserstein Barycenter Model Ensembling
Pierre L. Dognin, Igor Melnyk, Youssef Mroueh, Jerret Ross, Cicero Nogueira dos Santos, Tom Sercu
13 Feb 2019