
Discrepancies are Virtue: Weak-to-Strong Generalization through Lens of Intrinsic Dimension

Main: 16 pages
Appendix: 25 pages
Bibliography: 9 pages
13 figures
3 tables
Abstract

Weak-to-strong (W2S) generalization is a type of finetuning (FT) where a strong (large) student model is trained on pseudo-labels generated by a weak teacher. Surprisingly, W2S FT often outperforms the weak teacher. We seek to understand this phenomenon through the observation that FT often occurs in intrinsically low-dimensional spaces. Leveraging the low intrinsic dimensionality of FT, we analyze W2S in the ridgeless regression setting from a variance reduction perspective. For a strong student-weak teacher pair with sufficiently expressive low-dimensional feature subspaces $\mathcal{V}_s, \mathcal{V}_w$, we provide an exact characterization of the variance that dominates the generalization error of W2S. This unveils a virtue of discrepancy between the strong and weak models in W2S: the variance of the weak teacher is inherited by the strong student in $\mathcal{V}_s \cap \mathcal{V}_w$, while reduced by a factor of $\mathrm{dim}(\mathcal{V}_s)/N$ in the subspace of discrepancy $\mathcal{V}_w \setminus \mathcal{V}_s$, with $N$ pseudo-labels for W2S. Our analysis further casts light on the sample complexities and the scaling of performance gap recovery in W2S. The analysis is supported by experiments on synthetic regression problems, as well as real vision and NLP tasks.
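To make the setting the abstract describes concrete, here is a minimal NumPy sketch of W2S via ridgeless (minimum-norm) regression in low-dimensional subspaces. All specifics here are illustrative assumptions, not the paper's construction: the subspaces $\mathcal{V}_w, \mathcal{V}_s$ are drawn as random orthonormal bases, and the dimensions, sample sizes, and noise level are arbitrary choices. A weak teacher is fit on noisy labels in $\mathcal{V}_w$, then a strong student is fit in $\mathcal{V}_s$ on $N$ pseudo-labels produced by that teacher.

```python
# Hypothetical W2S sketch: ridgeless regression in random low-dim subspaces.
# Dimensions, sample sizes, and subspace construction are illustrative
# assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
D, d_w, d_s = 200, 20, 40      # ambient dim, weak/strong intrinsic dims
n_w, N = 500, 2000             # weak labeled samples, W2S pseudo-labels
sigma = 0.5                    # label noise level

beta = rng.normal(size=D) / np.sqrt(D)   # ground-truth coefficients

def subspace(d):
    # Orthonormal basis of a random d-dimensional subspace of R^D.
    Q, _ = np.linalg.qr(rng.normal(size=(D, d)))
    return Q

V_w, V_s = subspace(d_w), subspace(d_s)

def ridgeless_fit(X, y, V):
    # Minimum-norm least squares restricted to the column span of V.
    Z = X @ V                            # features projected into the subspace
    theta = np.linalg.pinv(Z) @ y        # ridgeless solution
    return V @ theta                     # coefficients lifted back to R^D

# 1) Weak teacher: fit on n_w noisy labels in V_w.
X_w = rng.normal(size=(n_w, D))
y_w = X_w @ beta + sigma * rng.normal(size=n_w)
beta_w = ridgeless_fit(X_w, y_w, V_w)

# 2) Strong student: fit in V_s on N pseudo-labels from the weak teacher.
X_s = rng.normal(size=(N, D))
y_pseudo = X_s @ beta_w                  # pseudo-labels carry teacher variance
beta_s = ridgeless_fit(X_s, y_pseudo, V_s)

# For isotropic Gaussian inputs, excess risk is ||beta_hat - beta||^2.
for name, b in [("weak teacher", beta_w), ("W2S student", beta_s)]:
    print(f"{name:12s} excess risk: {np.sum((b - beta) ** 2):.4f}")
```

Per the abstract's characterization, the teacher's variance that lies in $\mathcal{V}_s \cap \mathcal{V}_w$ passes through to the student, while the component in the discrepancy subspace $\mathcal{V}_w \setminus \mathcal{V}_s$ is attenuated roughly by $\mathrm{dim}(\mathcal{V}_s)/N$; in a sketch like this, increasing N should therefore shrink the student's inherited error from that subspace.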
