Random Forest Autoencoders for Guided Representation Learning

18 February 2025
Adrien Aumon
Shuang Ni
Myriam Lizotte
Guy Wolf
Kevin R. Moon
Jake S. Rhodes
Abstract

Decades of research have produced robust methods for unsupervised data visualization, yet supervised visualization – where expert labels guide representations – remains underexplored, as most supervised approaches prioritize classification over visualization. Recently, RF-PHATE, a diffusion-based manifold learning method leveraging random forests and information geometry, marked significant progress in supervised visualization. However, its lack of an explicit mapping function limits scalability and prevents application to unseen data, posing challenges for large datasets and label-scarce scenarios. To overcome these limitations, we introduce Random Forest Autoencoders (RF-AE), a neural network-based framework for out-of-sample kernel extension that combines the flexibility of autoencoders with the supervised learning strengths of random forests and the geometry captured by RF-PHATE. RF-AE enables efficient out-of-sample supervised visualization and outperforms existing methods, including RF-PHATE's standard kernel extension, in both accuracy and interpretability. Additionally, RF-AE is robust to the choice of hyperparameters and generalizes to any kernel-based dimensionality reduction method.
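As a rough illustration of the idea in the abstract (not the authors' implementation), the sketch below trains a small PyTorch autoencoder on random-forest proximity rows and regularizes its 2-D bottleneck toward a precomputed supervised embedding. The leaf-sharing proximity, the placeholder target coordinates, the network sizes, and the equal loss weighting are all illustrative assumptions; in the paper the target geometry would come from RF-PHATE itself.

# Hedged sketch only: an autoencoder over RF-proximity kernel rows whose
# bottleneck is pulled toward a precomputed low-dimensional embedding.
import numpy as np
import torch
import torch.nn as nn
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Random-forest proximities: fraction of trees in which two samples share a
# leaf (a simple stand-in for the proximities RF-PHATE builds on).
X, y = load_breast_cancer(return_X_y=True)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
leaves = rf.apply(X)                               # (n_samples, n_trees) leaf ids
prox = (leaves[:, None, :] == leaves[None, :, :]).mean(axis=2).astype(np.float32)

# Placeholder 2-D target embedding; in practice this would be the RF-PHATE
# coordinates of the training samples.
target = torch.tensor(np.random.default_rng(0).normal(size=(len(X), 2)),
                      dtype=torch.float32)

K = torch.tensor(prox)                             # each row is a sample's kernel vector
enc = nn.Sequential(nn.Linear(K.shape[1], 64), nn.ReLU(), nn.Linear(64, 2))
dec = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, K.shape[1]))
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

for epoch in range(200):
    opt.zero_grad()
    z = enc(K)
    # Reconstruction of the kernel row plus a geometric regularizer toward
    # the precomputed embedding (illustrative loss, not the paper's).
    loss = nn.functional.mse_loss(dec(z), K) + nn.functional.mse_loss(z, target)
    loss.backward()
    opt.step()

Under these assumptions, an unseen point would be embedded by computing its proximity row against the training samples and passing it through the trained encoder, which mirrors the out-of-sample kernel-extension role RF-AE plays for RF-PHATE.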

@article{aumon2025_2502.13257,
  title={Random Forest Autoencoders for Guided Representation Learning},
  author={Adrien Aumon and Shuang Ni and Myriam Lizotte and Guy Wolf and Kevin R. Moon and Jake S. Rhodes},
  journal={arXiv preprint arXiv:2502.13257},
  year={2025}
}