Deep Incremental Boosting
Alan Mosca, G. D. Magoulas
Global Conference on Artificial Intelligence (GAI), 2016
arXiv:1708.03704, 11 August 2017

Papers citing "Deep Incremental Boosting"

11 citing papers
End-to-End Training Induces Information Bottleneck through Layer-Role Differentiation: A Comparative Analysis with Layer-wise Training
Keitaro Sakamoto, Issei Sato
14 Feb 2024
Go beyond End-to-End Training: Boosting Greedy Local Learning with Context Supply
IEEE Transactions on Artificial Intelligence (IEEE TAI), 2023
Chengting Yu, Fengzhao Zhang, Hanzhi Ma, Aili Wang, Er-ping Li
12 Dec 2023
Efficient Diversity-Driven Ensemble for Deep Neural Networks
IEEE International Conference on Data Engineering (ICDE), 2020
Wentao Zhang, Jiawei Jiang, Yingxia Shao, Tengjiao Wang
26 Dec 2021
To Boost or not to Boost: On the Limits of Boosted Neural Networks
Sai Saketh Rambhatla, Michael J. Jones, Rama Chellappa
28 Jul 2021
Sparsely ensembled convolutional neural network classifiers via reinforcement learning
International Conference on Machine Learning Technologies (ICMLT), 2021
R. Malashin
07 Feb 2021
Revisiting Locally Supervised Learning: an Alternative to End-to-end Training
International Conference on Learning Representations (ICLR), 2021
Yulin Wang, Zanlin Ni, Shiji Song, Le Yang, Gao Huang
26 Jan 2021
Why Layer-Wise Learning is Hard to Scale-up and a Possible Solution via Accelerated Downsampling
Wenchi Ma, Miao Yu, Kaidong Li, Guanghui Wang
15 Oct 2020
Iterative Boosting Deep Neural Networks for Predicting Click-Through Rate
Amit Livne, Roy Dor, Eyal Mazuz, Tamar Didi, Bracha Shapira, Lior Rokach
26 Jul 2020
A Failure of Aspect Sentiment Classifiers and an Adaptive Re-weighting Solution
Hu Xu, Bing-Quan Liu, Lei Shu, Philip S. Yu
04 Nov 2019
Effects of Depth, Width, and Initialization: A Convergence Analysis of Layer-wise Training for Deep Linear Neural Networks
Analysis and Applications (Anal. Appl.), 2019
Yeonjong Shin
14 Oct 2019
Greedy Layerwise Learning Can Scale to ImageNet
Eugene Belilovsky, Michael Eickenberg, Edouard Oyallon
29 Dec 2018