MEAL: Multi-Model Ensemble via Adversarial Learning
arXiv: 1812.02425 · 6 December 2018
Authors: Zhiqiang Shen, Zhankui He, Xiangyang Xue
Topics: AAML, FedML
Papers citing "MEAL: Multi-Model Ensemble via Adversarial Learning" (29 papers shown)

| Title | Authors | Topics | Citations | Date |
|---|---|---|---|---|
| MLPerf Power: Benchmarking the Energy Efficiency of Machine Learning Systems from Microwatts to Megawatts for Sustainable AI | Arya Tschand, Arun Tejusve Raghunath Rajan, S. Idgunji, Anirban Ghosh, J. Holleman, ..., Rowan Taubitz, Sean Zhan, Scott Wasson, David Kanter, Vijay Janapa Reddi | | 3 | 15 Oct 2024 |
| Learning Privacy-Preserving Student Networks via Discriminative-Generative Distillation | Shiming Ge, Bochao Liu, Pengju Wang, Yong Li, Dan Zeng | FedML | 9 | 04 Sep 2024 |
| MLink: Linking Black-Box Models from Multiple Domains for Collaborative Inference | Mu Yuan, Lan Zhang, Zimu Zheng, Yi-Nan Zhang, Xiang-Yang Li | | 2 | 28 Sep 2022 |
| Overlooked Poses Actually Make Sense: Distilling Privileged Knowledge for Human Motion Prediction | Xiaoning Sun, Qiongjie Cui, Huaijiang Sun, Bin Li, Weiqing Li, Jianfeng Lu | | 7 | 02 Aug 2022 |
| Locality Guidance for Improving Vision Transformers on Tiny Datasets | Kehan Li, Runyi Yu, Zhennan Wang, Li-ming Yuan, Guoli Song, Jie Chen | ViT | 43 | 20 Jul 2022 |
| Teach me how to Interpolate a Myriad of Embeddings | Shashanka Venkataramanan, Ewa Kijak, Laurent Amsaleg, Yannis Avrithis | | 2 | 29 Jun 2022 |
| Knowledge Distillation of Transformer-based Language Models Revisited | Chengqiang Lu, Jianwei Zhang, Yunfei Chu, Zhengyu Chen, Jingren Zhou, Fei Wu, Haiqing Chen, Hongxia Yang | VLM | 10 | 29 Jun 2022 |
| Nonuniform-to-Uniform Quantization: Towards Accurate Quantization via Generalized Straight-Through Estimation | Zechun Liu, Kwang-Ting Cheng, Dong Huang, Eric P. Xing, Zhiqiang Shen | MQ | 102 | 29 Nov 2021 |
| Improved Knowledge Distillation via Adversarial Collaboration | Zhiqiang Liu, Chengkai Huang, Yanxia Liu | | 2 | 29 Nov 2021 |
| MixACM: Mixup-Based Robustness Transfer via Distillation of Activated Channel Maps | Muhammad Awais, Fengwei Zhou, Chuanlong Xie, Jiawei Li, Sung-Ho Bae, Zhenguo Li | AAML | 17 | 09 Nov 2021 |
| Arch-Net: Model Distillation for Architecture Agnostic Model Deployment | Weixin Xu, Zipeng Feng, Shuangkang Fang, Song Yuan, Yi Yang, Shuchang Zhou | MQ | 1 | 01 Nov 2021 |
| MUSE: Feature Self-Distillation with Mutual Information and Self-Information | Yunpeng Gong, Ye Yu, Gaurav Mittal, Greg Mori, Mei Chen | SSL | 2 | 25 Oct 2021 |
| Double Similarity Distillation for Semantic Image Segmentation | Yingchao Feng, Xian Sun, Wenhui Diao, Jihao Li, Xin Gao | | 62 | 19 Jul 2021 |
| BossNAS: Exploring Hybrid CNN-transformers with Block-wisely Self-supervised Neural Architecture Search | Changlin Li, Tao Tang, Guangrun Wang, Jiefeng Peng, Bing Wang, Xiaodan Liang, Xiaojun Chang | ViT | 105 | 23 Mar 2021 |
| Compacting Deep Neural Networks for Internet of Things: Methods and Applications | Ke Zhang, Hanbo Ying, Hongning Dai, Lin Li, Yuangyuang Peng, Keyi Guo, Hongfang Yu | | 38 | 20 Mar 2021 |
| S2-BNN: Bridging the Gap Between Self-Supervised Real and 1-bit Neural Networks via Guided Distribution Calibration | Zhiqiang Shen, Zechun Liu, Jie Qin, Lei Huang, Kwang-Ting Cheng, Marios Savvides | UQCV, SSL, MQ | 22 | 17 Feb 2021 |
| Exponential Moving Average Normalization for Self-supervised and Semi-supervised Learning | Zhaowei Cai, Avinash Ravichandran, Subhransu Maji, Charless C. Fowlkes, Z. Tu, Stefano Soatto | | 118 | 21 Jan 2021 |
| Ensemble Knowledge Distillation for CTR Prediction | Jieming Zhu, Jinyang Liu, Weiqi Li, Jincai Lai, Xiuqiang He, Liang Chen, Zibin Zheng | | 55 | 08 Nov 2020 |
| Densely Guided Knowledge Distillation using Multiple Teacher Assistants | Wonchul Son, Jaemin Na, Junyong Choi, Wonjun Hwang | | 110 | 18 Sep 2020 |
| MEAL V2: Boosting Vanilla ResNet-50 to 80%+ Top-1 Accuracy on ImageNet without Tricks | Zhiqiang Shen, Marios Savvides | | 63 | 17 Sep 2020 |
| Multiple Expert Brainstorming for Domain Adaptive Person Re-identification | Yunpeng Zhai, QiXiang Ye, Shijian Lu, Mengxi Jia, Rongrong Ji, Yonghong Tian | | 163 | 03 Jul 2020 |
| Knowledge Distillation: A Survey | Jianping Gou, B. Yu, Stephen J. Maybank, Dacheng Tao | VLM | 2,837 | 09 Jun 2020 |
| Feature-map-level Online Adversarial Knowledge Distillation | Inseop Chung, Seonguk Park, Jangho Kim, Nojun Kwak | GAN | 128 | 05 Feb 2020 |
| Preparing Lessons: Improve Knowledge Distillation with Better Supervision | Tiancheng Wen, Shenqi Lai, Xueming Qian | | 67 | 18 Nov 2019 |
| Be Your Own Teacher: Improve the Performance of Convolutional Neural Networks via Self Distillation | Linfeng Zhang, Jiebo Song, Anni Gao, Jingwei Chen, Chenglong Bao, Kaisheng Ma | FedML | 844 | 17 May 2019 |
| Towards Instance-level Image-to-Image Translation | Zhiqiang Shen, Mingyang Huang, Jianping Shi, Xiangyang Xue, Thomas Huang | | 102 | 05 May 2019 |
| Large scale distributed neural network training through online distillation | Rohan Anil, Gabriel Pereyra, Alexandre Passos, Róbert Ormándi, George E. Dahl, Geoffrey E. Hinton | FedML | 404 | 09 Apr 2018 |
| Image Generation from Scene Graphs | Justin Johnson, Agrim Gupta, Li Fei-Fei | GNN | 815 | 04 Apr 2018 |
| Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles | Balaji Lakshminarayanan, Alexander Pritzel, Charles Blundell | UQCV, BDL | 5,660 | 05 Dec 2016 |