DarkRank: Accelerating Deep Metric Learning via Cross Sample Similarities Transfer

5 July 2017 · Yuntao Chen, Naiyan Wang, Zhaoxiang Zhang · FedML
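For context on the title: DarkRank distills not per-sample outputs but the teacher's cross-sample similarity structure, training the student so that its ranking of batch neighbors matches the teacher's. The sketch below is an illustrative reconstruction of that idea, not the authors' code; the Euclidean similarity, the temperature tau, and the KL-based listwise matching are assumptions standing in for the paper's exact listwise ranking losses.

```python
import torch
import torch.nn.functional as F

def similarity_distribution(emb: torch.Tensor, anchor: int, tau: float = 1.0) -> torch.Tensor:
    """Softmax over negative Euclidean distances from one anchor embedding
    to every other sample in the batch: a listwise 'neighbor ranking'."""
    dists = torch.cdist(emb[anchor:anchor + 1], emb).squeeze(0)   # (B,)
    keep = torch.ones_like(dists, dtype=torch.bool)
    keep[anchor] = False                                          # drop self-distance
    return F.softmax(-dists[keep] / tau, dim=0)

def cross_sample_similarity_loss(student_emb, teacher_emb, tau=1.0):
    """KL divergence between teacher and student neighbor distributions,
    averaged over anchors; matching them transfers the teacher's
    cross-sample similarity structure to the student."""
    loss = torch.zeros((), device=student_emb.device)
    for a in range(student_emb.size(0)):
        p_teacher = similarity_distribution(teacher_emb, a, tau)
        p_student = similarity_distribution(student_emb, a, tau)
        loss = loss + F.kl_div(p_student.log(), p_teacher, reduction="sum")
    return loss / student_emb.size(0)

# Toy usage: random tensors stand in for teacher/student network outputs.
# Embedding widths may differ, since only within-batch distances are compared.
teacher = torch.randn(8, 128)
student = torch.randn(8, 32)
print(cross_sample_similarity_loss(student, teacher).item())
```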

Papers citing "DarkRank: Accelerating Deep Metric Learning via Cross Sample Similarities Transfer"

30 of 30 papers shown.
BackSlash: Rate Constrained Optimized Training of Large Language Models
Jun Wu, Jiangtao Wen, Yuxing Han · 23 Apr 2025

Dataset Distillation via Knowledge Distillation: Towards Efficient Self-Supervised Pre-Training of Deep Networks
S. Joshi, Jiayi Ni, Baharan Mirzasoleiman · DD · 03 Oct 2024

Choosing Wisely and Learning Deeply: Selective Cross-Modality Distillation via CLIP for Domain Generalization
Jixuan Leng, Yijiang Li, Haohan Wang · VLM · 26 Nov 2023

Teacher-Student Architecture for Knowledge Distillation: A Survey
Chengming Hu, Xuan Li, Danyang Liu, Haolun Wu, Xi Chen, Ju Wang, Xue Liu · 08 Aug 2023

Cross Architecture Distillation for Face Recognition
Weisong Zhao, Xiangyu Zhu, Zhixiang He, Xiaoyu Zhang, Zhen Lei · CVBM · 26 Jun 2023

Grouped Knowledge Distillation for Deep Face Recognition
Weisong Zhao, Xiangyu Zhu, Kaiwen Guo, Xiaoyu Zhang, Zhen Lei · CVBM · 10 Apr 2023

Distillation from Heterogeneous Models for Top-K Recommendation
SeongKu Kang, Wonbin Kweon, Dongha Lee, Jianxun Lian, Xing Xie, Hwanjo Yu · VLM · 02 Mar 2023

BiBench: Benchmarking and Analyzing Network Binarization
Haotong Qin, Mingyuan Zhang, Yifu Ding, Aoyu Li, Zhongang Cai, Ziwei Liu, F. I. F. Richard Yu, Xianglong Liu · MQ, AAML · 26 Jan 2023

RedBit: An End-to-End Flexible Framework for Evaluating the Accuracy of Quantized CNNs
A. M. Ribeiro-dos-Santos, João Dinis Ferreira, O. Mutlu, G. Falcão · MQ · 15 Jan 2023

Teacher-Student Architecture for Knowledge Learning: A Survey
Chengming Hu, Xuan Li, Dan Liu, Xi Chen, Ju Wang, Xue Liu · 28 Oct 2022

Coded Residual Transform for Generalizable Deep Metric Learning
Shichao Kan, Yixiong Liang, Min Li, Yigang Cen, Jianxin Wang, Z. He · 09 Oct 2022

Evaluation-oriented Knowledge Distillation for Deep Face Recognition
Y. Huang, Jiaxiang Wu, Xingkun Xu, Shouhong Ding · CVBM · 06 Jun 2022

Hot-Refresh Model Upgrades with Regression-Alleviating Compatible Training in Image Retrieval
Binjie Zhang, Yixiao Ge, Yantao Shen, Yu Li, Chun Yuan, Xuyuan Xu, Yexin Wang, Ying Shan · VLM · 24 Jan 2022

STURE: Spatial-Temporal Mutual Representation Learning for Robust Data Association in Online Multi-Object Tracking
Haidong Wang, Zhiyong Li, Yaping Li, Ke Nai, Ming Wen · VOT · 18 Jan 2022

Deep Spatially and Temporally Aware Similarity Computation for Road Network Constrained Trajectories
Ziquan Fang, Yuntao Du, Xinjun Zhu, Lu Chen, Yunjun Gao, Christian S. Jensen · AI4TS · 17 Dec 2021

Semantic Relation Preserving Knowledge Distillation for Image-to-Image Translation
Zeqi Li, R. Jiang, P. Aarabi · GAN, VLM · 30 Apr 2021

Content-Aware GAN Compression
Yuchen Liu, Zhixin Shu, Yijun Li, Zhe-nan Lin, Federico Perazzi, S. Kung · GAN · 06 Apr 2021

Fast Video Salient Object Detection via Spatiotemporal Knowledge Distillation
Tang Yi, Li Yuan, Wenbin Zou · 20 Oct 2020

Prime-Aware Adaptive Distillation
Youcai Zhang, Zhonghao Lan, Yuchen Dai, Fangao Zeng, Yan Bai, Jie Chang, Yichen Wei · 04 Aug 2020

Knowledge Distillation: A Survey
Jianping Gou, B. Yu, Stephen J. Maybank, Dacheng Tao · VLM · 09 Jun 2020

Binary Neural Networks: A Survey
Haotong Qin, Ruihao Gong, Xianglong Liu, Xiao Bai, Jingkuan Song, N. Sebe · MQ · 31 Mar 2020

GAN Compression: Efficient Architectures for Interactive Conditional GANs
Muyang Li, Ji Lin, Yaoyao Ding, Zhijian Liu, Jun-Yan Zhu, Song Han · GAN · 19 Mar 2020

Understanding and Improving Knowledge Distillation
Jiaxi Tang, Rakesh Shivanna, Zhe Zhao, Dong Lin, Anima Singh, Ed H. Chi, Sagar Jain · 10 Feb 2020

Interpretation and Simplification of Deep Forest
Sangwon Kim, Mira Jeong, ByoungChul Ko · FAtt · 14 Jan 2020

Towards Oracle Knowledge Distillation with Neural Architecture Search
Minsoo Kang, Jonghwan Mun, Bohyung Han · FedML · 29 Nov 2019

MobileFAN: Transferring Deep Hidden Representation for Face Alignment
Yang Zhao, Yifan Liu, Chunhua Shen, Yongsheng Gao, Shengwu Xiong · CVBM · 11 Aug 2019

Structured Knowledge Distillation for Dense Prediction
Yifan Liu, Chris Liu, Jingdong Wang, Zhenbo Luo · 11 Mar 2019

Factorized Distillation: Training Holistic Person Re-identification Model by Distilling an Ensemble of Partial ReID Models
Pengyuan Ren, Jianmin Li · 20 Nov 2018

Ranking Distillation: Learning Compact Ranking Models With High Performance for Recommender System
Jiaxi Tang, Ke Wang · 19 Sep 2018

Knowledge Distillation with Adversarial Samples Supporting Decision Boundary
Byeongho Heo, Minsik Lee, Sangdoo Yun, J. Choi · AAML · 15 May 2018