InFiConD: Interactive No-code Fine-tuning with Concept-based Knowledge Distillation
arXiv:2406.17838 · 25 June 2024
Jinbin Huang, Wenbin He, Liang Gou, Liu Ren, Chris Bryan

Papers citing "InFiConD: Interactive No-code Fine-tuning with Concept-based Knowledge Distillation" (8 of 8 papers shown)

Battle of the Backbones: A Large-Scale Comparison of Pretrained Models across Computer Vision Tasks
Micah Goldblum, Hossein Souri, Renkun Ni, Manli Shu, Viraj Prabhu, ..., Adrien Bardes, Judy Hoffman, Ramalingam Chellappa, Andrew Gordon Wilson, Tom Goldstein
VLM · 30 Oct 2023

Categories of Response-Based, Feature-Based, and Relation-Based Knowledge Distillation
Chuanguang Yang, Xinqiang Yu, Zhulin An, Yongjun Xu
VLM, OffRL · 19 Jun 2023

CLIP-S$^4$: Language-Guided Self-Supervised Semantic Segmentation
Wenbin He, Suphanut Jamonnak, Liang Gou, Liu Ren
VLM · 01 May 2023

Knowledge Distillation from Single to Multi Labels: an Empirical Study
Youcai Zhang, Yuzhuo Qin, Heng-Ye Liu, Yanhao Zhang, Yaqian Li, X. Gu
VLM · 15 Mar 2023

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
OSLM, ALM · 04 Mar 2022

Learning Student-Friendly Teacher Networks for Knowledge Distillation
D. Park, Moonsu Cha, C. Jeong, Daesin Kim, Bohyung Han
12 Feb 2021

Knowledge Distillation by On-the-Fly Native Ensemble
Xu Lan, Xiatian Zhu, S. Gong
12 Jun 2018

Large scale distributed neural network training through online distillation
Rohan Anil, Gabriel Pereyra, Alexandre Passos, Róbert Ormándi, George E. Dahl, Geoffrey E. Hinton
FedML · 09 Apr 2018