Few-shot learning of neural networks from scratch by pseudo example optimization

8 February 2018
Akisato Kimura
Zoubin Ghahramani
Koh Takeuchi
Tomoharu Iwata
N. Ueda
arXiv: 1802.03039 (abs · PDF · HTML)

Papers citing "Few-shot learning of neural networks from scratch by pseudo example optimization"

29 / 29 papers shown
Simple yet Effective Semi-supervised Knowledge Distillation from Vision-Language Models via Dual-Head Optimization
Seongjae Kang
Dong Bok Lee
Hyungjoon Jang
Sung Ju Hwang
VLM
517
1
0
12 May 2025
Categories of Response-Based, Feature-Based, and Relation-Based Knowledge Distillation
Chuanguang Yang
Xinqiang Yu
Zhulin An
Yongjun Xu
VLM, OffRL
611
39
0
19 Jun 2023
SLaM: Student-Label Mixing for Distillation with Unlabeled Examples
Neural Information Processing Systems (NeurIPS), 2023
Vasilis Kontonis
Fotis Iliopoulos
Khoa Trinh
Cenk Baykal
Gaurav Menghani
Erik Vee
327
9
0
08 Feb 2023
Black-box Few-shot Knowledge Distillation
European Conference on Computer Vision (ECCV), 2022
Dang Nguyen
Sunil R. Gupta
Kien Do
Svetha Venkatesh
223
17
0
25 Jul 2022
Holistic Approach to Measure Sample-level Adversarial Vulnerability and its Utility in Building Trustworthy Systems
Gaurav Kumar Nayak
Ruchit Rawal
Rohit Lal
Himanshu Patil
Anirban Chakraborty
AAML
207
2
0
05 May 2022
DistPro: Searching A Fast Knowledge Distillation Process via Meta Optimization
European Conference on Computer Vision (ECCV), 2022
XueQing Deng
Dawei Sun
Shawn D. Newsam
Peng Wang
189
10
0
12 Apr 2022
Distillation from heterogeneous unlabeled collections
Jean-Michel Begon
Pierre Geurts
166
0
0
17 Jan 2022
Beyond Classification: Knowledge Distillation using Multi-Object Impressions
Gaurav Kumar Nayak
Monish Keswani
Sharan Seshadri
Anirban Chakraborty
126
2
0
27 Oct 2021
Confidence Conditioned Knowledge Distillation
Sourav Mishra
Suresh Sundaram
223
2
0
06 Jul 2021
Zero-Shot Knowledge Distillation from a Decision-Based Black-Box Model
International Conference on Machine Learning (ICML), 2021
Zehao Wang
203
54
0
07 Jun 2021
Data-Free Knowledge Distillation with Soft Targeted Transfer Set Synthesis
AAAI Conference on Artificial Intelligence (AAAI), 2021
Zehao Wang
165
34
0
10 Apr 2021
Knowledge Distillation By Sparse Representation Matching
D. Tran
Moncef Gabbouj
Alexandros Iosifidis
206
0
0
31 Mar 2021
Self Regulated Learning Mechanism for Data Efficient Knowledge Distillation
IEEE International Joint Conference on Neural Networks (IJCNN), 2021
Sourav Mishra
Suresh Sundaram
301
1
0
14 Feb 2021
Mining Data Impressions from Deep Models as Substitute for the Unavailable Training Data
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2021
Gaurav Kumar Nayak
Konda Reddy Mopuri
Saksham Jain
Anirban Chakraborty
331
15
0
15 Jan 2021
Computation-Efficient Knowledge Distillation via Uncertainty-Aware Mixup
Pattern Recognition (Pattern Recognit.), 2020
Guodong Xu
Ziwei Liu
Chen Change Loy
UQCV
330
46
0
17 Dec 2020
Generative Adversarial Simulator
International Journal of Artificial Intelligence and Machine Learning (JAIML), 2020
Jonathan Raiman
GAN
83
0
0
23 Nov 2020
A Survey on Machine Learning from Few Samples
Pattern Recognition (Pattern Recognit.), 2020
Jiang Lu
Pinghua Gong
Jieping Ye
Jianwei Zhang
Changshu Zhang
377
81
0
06 Sep 2020
Knowledge Distillation in Deep Learning and its Applications
PeerJ Computer Science (PeerJ Comput. Sci.), 2020
Abdolmaged Alkhulaifi
Fahad Alsahli
Irfan Ahmad
FedML
234
113
0
17 Jul 2020
Knowledge Distillation: A Survey
Jianping Gou
B. Yu
Stephen J. Maybank
Dacheng Tao
VLM
2.2K
4,015
0
09 Jun 2020
Neural Networks Are More Productive Teachers Than Human Raters: Active Mixup for Data-Efficient Knowledge Distillation from a Blackbox Model
Computer Vision and Pattern Recognition (CVPR), 2020
Dongdong Wang
Yandong Li
Liqiang Wang
Boqing Gong
192
51
0
31 Mar 2020
DeGAN : Data-Enriching GAN for Retrieving Representative Samples from a Trained Classifier
AAAI Conference on Artificial Intelligence (AAAI), 2019
Sravanti Addepalli
Gaurav Kumar Nayak
Anirban Chakraborty
R. Venkatesh Babu
196
39
0
27 Dec 2019
Dreaming to Distill: Data-free Knowledge Transfer via DeepInversion
Computer Vision and Pattern Recognition (CVPR), 2019
Hongxu Yin
Pavlo Molchanov
Zhizhong Li
J. Álvarez
Arun Mallya
Derek Hoiem
N. Jha
Jan Kautz
624
680
0
18 Dec 2019
BEAN: Interpretable Representation Learning with Biologically-Enhanced Artificial Neuronal Assembly Regularization
Frontiers in Neurorobotics (FN), 2019
Yuyang Gao
Giorgio Ascoli
Bo Pan
299
18
0
27 Sep 2019
Well-Read Students Learn Better: On the Importance of Pre-training Compact Models
Iulia Turc
Ming-Wei Chang
Kenton Lee
Kristina Toutanova
385
242
0
23 Aug 2019
Zero-shot Knowledge Transfer via Adversarial Belief Matching
Neural Information Processing Systems (NeurIPS), 2019
P. Micaelli
Amos Storkey
447
250
0
23 May 2019
Zero-Shot Knowledge Distillation in Deep Networks
International Conference on Machine Learning (ICML), 2019
Gaurav Kumar Nayak
Konda Reddy Mopuri
Vaisakh Shaj
R. Venkatesh Babu
Anirban Chakraborty
378
259
0
20 May 2019
Few-Shot and Zero-Shot Learning for Historical Text Normalization
Marcel Bollmann
N. Korchagina
Anders Søgaard
AI4TS, VLM
243
2
0
12 Mar 2019
Scalable Logo Recognition using Proxies
IEEE Workshop/Winter Conference on Applications of Computer Vision (WACV), 2018
István Fehérvári
Srikar Appalaraju
187
46
0
19 Nov 2018
Generating Natural Adversarial Examples
Zhengli Zhao
Dheeru Dua
Sameer Singh
GAN, AAML
834
653
0
31 Oct 2017