Recent advances in representation learning have successfully leveraged the underlying domain-specific structure of data across various fields. However, representing diverse and complex entities stored in tabular format within a latent space remains challenging. In this paper, we introduce DeepCAE, a novel method for calculating the regularization term of multi-layer contractive autoencoders (CAEs). Additionally, we formalize a general-purpose entity embedding framework and use it to show empirically that DeepCAE outperforms all other tested autoencoder variants in both reconstruction quality and downstream prediction performance. Notably, when compared to a stacked CAE across 13 datasets, DeepCAE achieves a 34% reduction in reconstruction error.
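For context, the contractive penalty that DeepCAE generalizes to the multi-layer setting is the squared Frobenius norm of the encoder's Jacobian. Below is a minimal PyTorch sketch of the classic single-layer CAE loss (Rifai et al., 2011), using the closed-form penalty for a sigmoid encoder. This is not the paper's DeepCAE regularizer, only the baseline it builds on; the layer sizes, the weight lam, and the names ContractiveAE and cae_loss are illustrative assumptions.

import torch
import torch.nn as nn

class ContractiveAE(nn.Module):
    # Single-layer contractive autoencoder with a sigmoid encoder.
    # Dimensions are placeholders, not values from the paper.
    def __init__(self, n_in=64, n_hidden=16):
        super().__init__()
        self.encoder = nn.Linear(n_in, n_hidden)
        self.decoder = nn.Linear(n_hidden, n_in)

    def forward(self, x):
        h = torch.sigmoid(self.encoder(x))
        return self.decoder(h), h

def cae_loss(model, x, lam=1e-4):
    x_hat, h = model(x)
    recon = ((x_hat - x) ** 2).sum(dim=1).mean()
    # Closed-form squared Frobenius norm of the encoder Jacobian dh/dx:
    # for sigmoid units, J_ij = h_i (1 - h_i) W_ij, hence
    # ||J||_F^2 = sum_i (h_i (1 - h_i))^2 * sum_j W_ij^2.
    dh = (h * (1 - h)) ** 2                        # (batch, n_hidden)
    w_sq = (model.encoder.weight ** 2).sum(dim=1)  # (n_hidden,)
    contractive = (dh * w_sq).sum(dim=1).mean()
    return recon + lam * contractive

# Usage: one gradient step on a random batch of tabular features.
x = torch.randn(32, 64)
model = ContractiveAE()
loss = cae_loss(model, x)
loss.backward()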
@article{bertrand2025_2402.18164,
  title   = {Autoencoder-based General Purpose Representation Learning for Customer Embedding},
  author  = {Jan Henrik Bertrand and David B. Hoffmann and Jacopo Pio Gargano and Laurent Mombaerts and Jonathan Taws},
  journal = {arXiv preprint arXiv:2402.18164},
  year    = {2025}
}