arXiv:2201.08514 — Cited By
How does unlabeled data improve generalization in self-training? A one-hidden-layer theoretical analysis
Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, Jinjun Xiong (21 January 2022) [SSL, MLT]
Papers citing "How does unlabeled data improve generalization in self-training? A one-hidden-layer theoretical analysis" (7 papers)
Retraining with Predicted Hard Labels Provably Increases Model Accuracy
Rudrajit Das, Inderjit S. Dhillon, Alessandro Epasto, Adel Javanmard, Jieming Mao, Vahab Mirrokni, Sujay Sanghavi, Peilin Zhong (17 Jun 2024)
How does promoting the minority fraction affect generalization? A theoretical study of the one-hidden-layer neural network on group imbalance
Hongkang Li, Shuai Zhang, Yihua Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen (12 Mar 2024)
Random Matrix Analysis to Balance between Supervised and Unsupervised Learning under the Low Density Separation Assumption
Vasilii Feofanov, Malik Tiomoko, Aladin Virmaux (20 Oct 2023)
Density Ratio Estimation-based Bayesian Optimization with Semi-Supervised Learning
Jungtaek Kim (24 May 2023)
DuNST: Dual Noisy Self Training for Semi-Supervised Controllable Text Generation
Yuxi Feng, Xiaoyuan Yi, Xiting Wang, L. Lakshmanan, Xing Xie (16 Dec 2022) [DiffM]
Self-Training: A Survey
Massih-Reza Amini, Vasilii Feofanov, Loïc Pauletto, Lies Hadjadj, Emilie Devijver, Yury Maximov (24 Feb 2022) [SSL]
Revisiting Self-Training for Neural Sequence Generation
Junxian He, Jiatao Gu, Jiajun Shen, Marc'Aurelio Ranzato (30 Sep 2019) [SSL, LRM]