Born Again Neural Networks (arXiv: 1805.04770)
12 May 2018
Tommaso Furlanello, Zachary Chase Lipton, Michael Tschannen, Laurent Itti, Anima Anandkumar

Papers citing "Born Again Neural Networks" (50 of 167 shown)

Generate, Annotate, and Learn: NLP with Synthetic Text
Xuanli He, Islam Nassar, J. Kiros, Gholamreza Haffari, Mohammad Norouzi (11 Jun 2021)

Black-Box Dissector: Towards Erasing-based Hard-Label Model Stealing Attack [AAML]
Yixu Wang, Jie Li, Hong Liu, Yan Wang, Yongjian Wu, Feiyue Huang, Rongrong Ji (03 May 2021)

Knowledge Distillation as Semiparametric Inference
Tri Dao, G. Kamath, Vasilis Syrgkanis, Lester W. Mackey (20 Apr 2021)

Efficient Transformers in Reinforcement Learning using Actor-Learner Distillation
Emilio Parisotto, Ruslan Salakhutdinov (04 Apr 2021)

A Realistic Evaluation of Semi-Supervised Learning for Fine-Grained Classification
Jong-Chyi Su, Zezhou Cheng, Subhransu Maji (01 Apr 2021)

Distilling a Powerful Student Model via Online Knowledge Distillation [FedML]
Shaojie Li, Mingbao Lin, Yan Wang, Yongjian Wu, Yonghong Tian, Ling Shao, Rongrong Ji (26 Mar 2021)

More Photos are All You Need: Semi-Supervised Learning for Fine-Grained Sketch Based Image Retrieval [GAN, SSL]
A. Bhunia, Pinaki Nath Chowdhury, Aneeshan Sain, Yongxin Yang, Tao Xiang, Yi-Zhe Song (25 Mar 2021)

Universal Representation Learning from Multiple Domains for Few-shot Classification [SSL, OOD, VLM]
Weihong Li, Xialei Liu, Hakan Bilen (25 Mar 2021)

Compacting Deep Neural Networks for Internet of Things: Methods and Applications
Ke Zhang, Hanbo Ying, Hongning Dai, Lin Li, Yuangyuang Peng, Keyi Guo, Hongfang Yu (20 Mar 2021)

Knowledge Evolution in Neural Networks
Ahmed Taha, Abhinav Shrivastava, L. Davis (09 Mar 2021)

Adaptive Consistency Regularization for Semi-Supervised Transfer Learning
Abulikemu Abuduweili, Xingjian Li, Humphrey Shi, Chengzhong Xu, Dejing Dou (03 Mar 2021)

Localization Distillation for Dense Object Detection [ObjD]
Zhaohui Zheng, Rongguang Ye, Ping Wang, Dongwei Ren, W. Zuo, Qibin Hou, Ming-Ming Cheng (24 Feb 2021)

Essentials for Class Incremental Learning [CLL]
Sudhanshu Mittal, Silvio Galesso, Thomas Brox (18 Feb 2021)

Improving Sequence-to-Sequence Pre-training via Sequence Span Rewriting [LRM]
Wangchunshu Zhou, Tao Ge, Canwen Xu, Ke Xu, Furu Wei (02 Jan 2021)

Understanding and Improving Lexical Choice in Non-Autoregressive Translation
Liang Ding, Longyue Wang, Xuebo Liu, Derek F. Wong, Dacheng Tao, Zhaopeng Tu (29 Dec 2020)

Towards Understanding Ensemble, Knowledge Distillation and Self-Distillation in Deep Learning [FedML]
Zeyuan Allen-Zhu, Yuanzhi Li (17 Dec 2020)

Weakly Supervised Label Smoothing
Gustavo Penha, C. Hauff (15 Dec 2020)

Teach me to segment with mixed supervision: Confident students become masters
Jose Dolz, Christian Desrosiers, Ismail Ben Ayed (15 Dec 2020)

DE-RRD: A Knowledge Distillation Framework for Recommender System
SeongKu Kang, Junyoung Hwang, Wonbin Kweon, Hwanjo Yu (08 Dec 2020)

Deep Serial Number: Computational Watermarking for DNN Intellectual Property Protection
Ruixiang Tang, Mengnan Du, Xia Hu (17 Nov 2020)

Meta Automatic Curriculum Learning
Rémy Portelas, Clément Romac, Katja Hofmann, Pierre-Yves Oudeyer (16 Nov 2020)

Knowledge Distillation in Wide Neural Networks: Risk Bound, Data Efficiency and Imperfect Teacher
Guangda Ji, Zhanxing Zhu (20 Oct 2020)

Lifelong Language Knowledge Distillation [KELM, CLL]
Yung-Sung Chuang, Shang-Yu Su, Yun-Nung Chen (05 Oct 2020)

Why have a Unified Predictive Uncertainty? Disentangling it using Deep Split Ensembles [PER, UQCV, BDL, UD]
U. Sarawgi, W. Zulfikar, Rishab Khincha, Pattie Maes (25 Sep 2020)

Densely Guided Knowledge Distillation using Multiple Teacher Assistants
Wonchul Son, Jaemin Na, Junyong Choi, Wonjun Hwang (18 Sep 2020)

A Practical Incremental Method to Train Deep CTR Models [CLL]
Yichao Wang, Huifeng Guo, Ruiming Tang, Zhirong Liu, Xiuqiang He (04 Sep 2020)

Learning with Privileged Information for Efficient Image Super-Resolution
Wonkyung Lee, Junghyup Lee, Dohyung Kim, Bumsub Ham (15 Jul 2020)

Towards Practical Lipreading with Distilled and Efficient Models
Pingchuan Ma, Brais Martínez, Stavros Petridis, M. Pantic (13 Jul 2020)

Robust Re-Identification by Multiple Views Knowledge Distillation
Angelo Porrello, Luca Bergamini, Simone Calderara (08 Jul 2020)

A Sequential Self Teaching Approach for Improving Generalization in Sound Event Recognition
Anurag Kumar, V. Ithapu (30 Jun 2020)

Transient Non-Stationarity and Generalisation in Deep Reinforcement Learning
Maximilian Igl, Gregory Farquhar, Jelena Luketina, Wendelin Boehmer, Shimon Whiteson (10 Jun 2020)

Knowledge Distillation: A Survey [VLM]
Jianping Gou, B. Yu, Stephen J. Maybank, Dacheng Tao (09 Jun 2020)

An Empirical Analysis of the Impact of Data Augmentation on Knowledge Distillation
Deepan Das, Haley Massa, Abhimanyu Kulkarni, Theodoros Rekatsinas (06 Jun 2020)

Syntactic Structure Distillation Pretraining For Bidirectional Encoders
A. Kuncoro, Lingpeng Kong, Daniel Fried, Dani Yogatama, Laura Rimell, Chris Dyer, Phil Blunsom (27 May 2020)

Regularizing Class-wise Predictions via Self-knowledge Distillation
Sukmin Yun, Jongjin Park, Kimin Lee, Jinwoo Shin (31 Mar 2020)

Circumventing Outliers of AutoAugment with Knowledge Distillation
Longhui Wei, Anxiang Xiao, Lingxi Xie, Xin Chen, Xiaopeng Zhang, Qi Tian (25 Mar 2020)

Born-Again Tree Ensembles
Thibaut Vidal, Toni Pacheco, Maximilian Schiffer (24 Mar 2020)

Meta Pseudo Labels [VLM]
Hieu H. Pham, Zihang Dai, Qizhe Xie, Minh-Thang Luong, Quoc V. Le (23 Mar 2020)

Unifying Specialist Image Embedding into Universal Image Embedding [SSL]
Yang Feng, Futang Peng, Xu-Yao Zhang, Wei-wei Zhu, Shanfeng Zhang, Howard Zhou, Zhen Li, Tom Duerig, Shih-Fu Chang, Jiebo Luo (08 Mar 2020)

Self-Distillation Amplifies Regularization in Hilbert Space
H. Mobahi, Mehrdad Farajtabar, Peter L. Bartlett (13 Feb 2020)

Understanding and Improving Knowledge Distillation
Jiaxi Tang, Rakesh Shivanna, Zhe Zhao, Dong Lin, Anima Singh, Ed H. Chi, Sagar Jain (10 Feb 2020)

Learning From Multiple Experts: Self-paced Knowledge Distillation for Long-tailed Classification
Liuyu Xiang, Guiguang Ding, Jungong Han (06 Jan 2020)

Modeling Teacher-Student Techniques in Deep Neural Networks for Knowledge Distillation
Sajjad Abbasi, M. Hajabdollahi, N. Karimi, S. Samavi (31 Dec 2019)

Preparing Lessons: Improve Knowledge Distillation with Better Supervision
Tiancheng Wen, Shenqi Lai, Xueming Qian (18 Nov 2019)

Label-similarity Curriculum Learning
Ürün Dogan, A. Deshmukh, Marcin Machura, Christian Igel (15 Nov 2019)

Deep Model Transferability from Attribution Maps
Jie Song, Yixin Chen, Xinchao Wang, Chengchao Shen, Mingli Song (26 Sep 2019)

Knowledge Transfer Graph for Deep Collaborative Learning
Soma Minami, Tsubasa Hirakawa, Takayoshi Yamashita, H. Fujiyoshi (10 Sep 2019)

Distilled Siamese Networks for Visual Tracking
Jianbing Shen, Yuanpei Liu, Xingping Dong, Xiankai Lu, F. Khan, S. Hoi (24 Jul 2019)

DisCoRL: Continual Reinforcement Learning via Policy Distillation [OffRL]
Kalifou René Traoré, Hugo Caselles-Dupré, Timothée Lesort, Te Sun, Guanghang Cai, Natalia Díaz Rodríguez, David Filliat (11 Jul 2019)

BAM! Born-Again Multi-Task Networks for Natural Language Understanding
Kevin Clark, Minh-Thang Luong, Urvashi Khandelwal, Christopher D. Manning, Quoc V. Le (10 Jul 2019)