Few-shot learning of neural networks from scratch by pseudo example optimization

8 February 2018 · arXiv:1802.03039
Akisato Kimura, Zoubin Ghahramani, Koh Takeuchi, Tomoharu Iwata, N. Ueda

Papers citing "Few-shot learning of neural networks from scratch by pseudo example optimization"

9 / 9 papers shown

Simple Semi-supervised Knowledge Distillation from Vision-Language Models via Dual-Head Optimization
Seongjae Kang, Dong Bok Lee, Hyungjoon Jang, Sung Ju Hwang
VLM · 0 citations · 12 May 2025

Knowledge Distillation: A Survey
Jianping Gou, B. Yu, Stephen J. Maybank, Dacheng Tao
VLM · 2,843 citations · 09 Jun 2020

Neural Networks Are More Productive Teachers Than Human Raters: Active Mixup for Data-Efficient Knowledge Distillation from a Blackbox Model
Dongdong Wang, Yandong Li, Liqiang Wang, Boqing Gong
48 citations · 31 Mar 2020

BEAN: Interpretable Representation Learning with Biologically-Enhanced Artificial Neuronal Assembly Regularization
Yuyang Gao, Giorgio Ascoli, Liang Zhao
15 citations · 27 Sep 2019

Zero-shot Knowledge Transfer via Adversarial Belief Matching
P. Micaelli, Amos Storkey
228 citations · 23 May 2019

Zero-Shot Knowledge Distillation in Deep Networks
Gaurav Kumar Nayak, Konda Reddy Mopuri, Vaisakh Shaj, R. Venkatesh Babu, Anirban Chakraborty
245 citations · 20 May 2019

Adversarial Examples, Uncertainty, and Transfer Testing Robustness in Gaussian Process Hybrid Deep Networks
John Bradshaw, A. G. Matthews, Zoubin Ghahramani
BDL, AAML · 171 citations · 08 Jul 2017

Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results
Antti Tarvainen, Harri Valpola
OOD, MoMe · 1,275 citations · 06 Mar 2017

Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
Y. Gal, Zoubin Ghahramani
UQCV, BDL · 9,138 citations · 06 Jun 2015