Large Language Models as Attribution Regularizers for Efficient Model Training
Main: 16 pages
Bibliography: 6 pages
Appendix: 2 pages
6 figures
8 tables
Abstract
Large Language Models (LLMs) have demonstrated remarkable performance across diverse domains. However, effectively leveraging their vast knowledge to train smaller downstream models remains an open challenge, especially in domains such as tabular data learning, where simpler models are often preferred for their interpretability and efficiency.
