Distilled Neural Networks for Efficient Learning to Rank
F. M. Nardini, Cosimo Rulli, Salvatore Trani, Rossano Venturini
22 February 2022 · arXiv:2202.10728 · Community: FedML
Papers citing "Distilled Neural Networks for Efficient Learning to Rank" (7 of 7 shown)

| Title | Authors | Community | Date |
| --- | --- | --- | --- |
| A Teacher-Free Graph Knowledge Distillation Framework with Dual Self-Distillation | Lirong Wu, Haitao Lin, Zhangyang Gao, Guojiang Zhao, Stan Z. Li | | 06 Mar 2024 |
| Post-hoc Selection of Pareto-Optimal Solutions in Search and Recommendation | Vincenzo Paparella, V. W. Anelli, F. M. Nardini, R. Perego, T. D. Noia | | 21 Jun 2023 |
| Neural Network Compression using Binarization and Few Full-Precision Weights | F. M. Nardini, Cosimo Rulli, Salvatore Trani, Rossano Venturini | MQ | 15 Jun 2023 |
| Online Knowledge Distillation via Mutual Contrastive Learning for Visual Recognition | Chuanguang Yang, Zhulin An, Helong Zhou, Fuzhen Zhuang, Yongjun Xu, Qian Zhang | | 23 Jul 2022 |
| Carbon Emissions and Large Neural Network Training | David A. Patterson, Joseph E. Gonzalez, Quoc V. Le, Chen Liang, Lluís-Miquel Munguía, D. Rothchild, David R. So, Maud Texier, J. Dean | AI4CE | 21 Apr 2021 |
| MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications | Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, M. Andreetto, Hartwig Adam | 3DH | 17 Apr 2017 |
| Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights | Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen | MQ | 10 Feb 2017 |