Rethinking Pre-Training in Tabular Data: A Neighborhood Embedding Perspective

Pre-training is prevalent in deep learning for vision and text data, leveraging knowledge from other datasets to enhance downstream tasks. However, for tabular data, the inherent heterogeneity in attribute and label spaces across datasets complicates the learning of shareable knowledge. We propose Tabular data Pre-Training via Meta-representation (TabPTM), aiming to pre-train a general tabular model over diverse datasets. The core idea is to embed data instances into a shared feature space, where each instance is represented by its distance to a fixed number of nearest neighbors and their labels. This "meta-representation" transforms heterogeneous tasks into homogeneous local prediction problems, enabling the model to infer labels (or scores for each label) based on neighborhood information. As a result, the pre-trained TabPTM can be applied directly to new datasets, regardless of their diverse attributes and labels, without further fine-tuning. Extensive experiments on 101 datasets confirm TabPTM's effectiveness in both classification and regression tasks, with and without fine-tuning.
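To make the meta-representation concrete, below is a minimal sketch of how one query instance could be encoded by its distances to the k nearest training neighbors together with those neighbors' labels. The Euclidean metric, the value of k, and the function name meta_representation are illustrative assumptions; the abstract does not specify the paper's exact distance measure or normalization.

import numpy as np

def meta_representation(x, X_train, y_train, k=16):
    # Sketch of a neighborhood-based meta-representation: the query `x`
    # is described by (distance, label) pairs for its k nearest training
    # neighbors, giving a fixed-length encoding that is independent of
    # the dataset's original attribute dimensionality.
    # Assumptions: Euclidean distance, no feature normalization.
    dists = np.linalg.norm(X_train - x, axis=1)   # distance to every training instance
    idx = np.argsort(dists)[:k]                   # indices of the k nearest neighbors
    return np.stack([dists[idx], y_train[idx]], axis=1)  # shape (k, 2)

# Toy usage: a 5-attribute dataset; the output shape is (k, 2)
# no matter how many raw attributes the dataset has.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 5))
y_train = rng.integers(0, 2, size=100).astype(float)
x_query = rng.normal(size=5)
print(meta_representation(x_query, X_train, y_train, k=4))

Because every dataset, whatever its attribute count or label set, is mapped to the same (k, 2)-shaped local encoding, a single pre-trained model can consume instances from heterogeneous tasks without per-dataset input layers.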
@article{ye2025_2311.00055,
  title   = {Rethinking Pre-Training in Tabular Data: A Neighborhood Embedding Perspective},
  author  = {Han-Jia Ye and Qi-Le Zhou and Huai-Hong Yin and De-Chuan Zhan and Wei-Lun Chao},
  journal = {arXiv preprint arXiv:2311.00055},
  year    = {2025}
}