Gradient Descent Converges Linearly for Logistic Regression on Separable Data. International Conference on Machine Learning (ICML), 2023.
Boosting with Tempered Exponential Measures. Neural Information Processing Systems (NeurIPS), 2023.
Convex Risk Minimization and Conditional Probability Estimation. Annual Conference on Computational Learning Theory (COLT), 2015.
Parallel Coordinate Descent for the AdaBoost Problem. International Conference on Machine Learning and Applications (ICMLA), 2013.
Boosting with the Logistic Loss is Consistent. Annual Conference on Computational Learning Theory (COLT), 2013.
Margins, Shrinkage, and Boosting. International Conference on Machine Learning (ICML), 2013.