
Mitigating Task-Order Sensitivity and Forgetting via Hierarchical Second-Order Consolidation

Protik Nag
Krishnan Raghavan
Vignesh Narayanan
Main: 8 pages
Appendix: 11 pages
Bibliography: 2 pages
10 figures
5 tables
Abstract

We introduce Hierarchical Taylor Series-based Continual Learning (HTCL), a framework that couples fast local adaptation with conservative, second-order global consolidation to address the high variance introduced by random task ordering. To curb these order effects, HTCL identifies the best intra-group task sequence and integrates the resulting local updates through a Hessian-regularized Taylor expansion, yielding a consolidation step with theoretical guarantees. The approach naturally extends to an L-level hierarchy, enabling multiscale knowledge integration in a manner not supported by conventional single-level CL systems. Across a wide range of datasets and replay and regularization baselines, HTCL acts as a model-agnostic consolidation layer that consistently enhances performance, yielding mean accuracy gains of 7% to 25% while reducing the standard deviation of final accuracy by up to 68% across random task permutations.
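The abstract only summarizes the consolidation step; the sketch below illustrates the general idea rather than the paper's exact algorithm. It replaces the global and local losses with their second-order Taylor expansions around their respective optima and solves the resulting quadratic in closed form, using a diagonal, Fisher-style squared-gradient estimate in place of the full Hessian. The function names (`diag_hessian_estimate`, `consolidate`) and the curvature-weighted averaging rule are assumptions for illustration.

```python
import torch

def diag_hessian_estimate(model, loss_fn, loader):
    """Diagonal Hessian proxy: average of squared gradients (Fisher-style).
    An assumption for illustration; the paper may estimate curvature differently."""
    est = [torch.zeros_like(p) for p in model.parameters()]
    batches = 0
    for x, y in loader:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for e, p in zip(est, model.parameters()):
            if p.grad is not None:
                e.add_(p.grad.detach() ** 2)
        batches += 1
    return [e / max(batches, 1) for e in est]

def consolidate(theta_g, theta_l, h_g, h_l, eps=1e-8):
    """Closed-form merge of global and locally adapted parameters.
    Minimizing the sum of two quadratic (second-order Taylor) approximations,
    (1/2)(t - t_g)' H_g (t - t_g) + (1/2)(t - t_l)' H_l (t - t_l),
    with diagonal Hessians gives a curvature-weighted average:
    parameters the old tasks are sensitive to stay near the global model,
    while flat directions absorb the new local update."""
    return [(hg * tg + hl * tl) / (hg + hl + eps)
            for tg, tl, hg, hl in zip(theta_g, theta_l, h_g, h_l)]
```

In this toy form, the hierarchy would apply `consolidate` recursively: task-level solutions merge into group-level parameters, which in turn merge into the top-level model, with a fresh curvature estimate at each level.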
