ResearchTrend.AI

Autoencoder-based General Purpose Representation Learning for Customer Embedding

28 February 2024
Jan Henrik Bertrand
David B. Hoffmann
Jacopo Pio Gargano
Laurent Mombaerts
Jonathan Taws
    OOD
Abstract

Recent advances in representation learning have successfully leveraged the underlying domain-specific structure of data across various fields. However, representing diverse and complex entities stored in tabular format within a latent space remains challenging. In this paper, we introduce DEEPCAE, a novel method for calculating the regularization term for multi-layer contractive autoencoders (CAEs). Additionally, we formalize a general-purpose entity embedding framework and use it to empirically show that DEEPCAE outperforms all other tested autoencoder variants in both reconstruction performance and downstream prediction performance. Notably, when compared to a stacked CAE across 13 datasets, DEEPCAE achieves a 34% improvement in reconstruction error.
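The contraction penalty that DEEPCAE generalizes to multiple layers comes from the classic single-layer contractive autoencoder: the reconstruction loss is augmented with the squared Frobenius norm of the encoder's Jacobian, which has a closed form for a sigmoid encoder. The following is a minimal NumPy sketch of that standard single-layer penalty (after Rifai et al.), not the paper's multi-layer DEEPCAE regularizer; all weights, shapes, and the `lam` coefficient are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cae_loss(x, W, b, W_dec, b_dec, lam=0.1):
    """Single-layer contractive autoencoder loss:
    reconstruction MSE + lam * ||dh/dx||_F^2 (closed form for sigmoid)."""
    h = sigmoid(x @ W + b)          # encoding, shape (batch, d_hidden)
    x_hat = h @ W_dec + b_dec       # linear decoding
    recon = np.mean((x - x_hat) ** 2)
    # For h_j = sigmoid(w_j . x + b_j): dh_j/dx_i = h_j(1 - h_j) * W_ij,
    # so ||J||_F^2 = sum_j (h_j(1-h_j))^2 * sum_i W_ij^2, averaged over batch.
    jac = np.mean(np.sum((h * (1 - h)) ** 2 * np.sum(W ** 2, axis=0), axis=1))
    return recon + lam * jac

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 5))
W = rng.normal(scale=0.1, size=(5, 3))
b = np.zeros(3)
W_dec = rng.normal(scale=0.1, size=(3, 5))
b_dec = np.zeros(5)
print(cae_loss(x, W, b, W_dec, b_dec))
```

The penalty shrinks the encoder's sensitivity to input perturbations, encouraging representations that are locally invariant; a stacked CAE applies this penalty layer by layer, whereas DEEPCAE computes a regularization term for the multi-layer encoder as a whole.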

@article{bertrand2025_2402.18164,
  title={Autoencoder-based General Purpose Representation Learning for Customer Embedding},
  author={Jan Henrik Bertrand and David B. Hoffmann and Jacopo Pio Gargano and Laurent Mombaerts and Jonathan Taws},
  journal={arXiv preprint arXiv:2402.18164},
  year={2025}
}