
A Unified Data Representation Learning for Non-parametric Two-sample Testing

30 November 2024
Xunye Tian
Liuhua Peng
Zhijian Zhou
Mingming Gong
Arthur Gretton
Feng Liu
Abstract

Learning effective data representations is crucial in non-parametric two-sample testing. Common approaches first split the data into training and test sets and then learn representations on the training set alone. However, recent theoretical studies have shown that, as long as the sample indexes are not used during the learning process, the whole dataset can be used to learn representations while still controlling the Type-I error. This fact motivates us to use the test set (without its sample indexes) to facilitate representation learning for the test. To this end, we propose a representation-learning two-sample testing (RL-TST) framework. RL-TST first performs purely self-supervised representation learning on the entire dataset to capture inherent representations (IRs) that reflect the underlying data manifold. A discriminative model is then trained on these IRs to learn discriminative representations (DRs), enabling the framework to leverage both the rich structural information in the IRs and the discriminative power of the DRs. Extensive experiments demonstrate that RL-TST outperforms representative approaches by simultaneously exploiting the data-manifold information in the test set and enhancing test power by learning the DRs on the training set.
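The permutation-based testing pipeline the abstract builds on can be sketched with a kernel MMD statistic, a standard choice in this literature. This is an illustrative minimal example, not the authors' RL-TST code: it omits the representation-learning stage and applies a fixed RBF kernel directly to the samples; all function names and the bandwidth choice are assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, bandwidth=1.0):
    # Gaussian (RBF) kernel matrix from pairwise squared distances.
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-sq / (2 * bandwidth**2))

def mmd2(X, Y, bandwidth=1.0):
    # Biased estimate of the squared MMD between samples X and Y.
    Kxx = rbf_kernel(X, X, bandwidth)
    Kyy = rbf_kernel(Y, Y, bandwidth)
    Kxy = rbf_kernel(X, Y, bandwidth)
    return Kxx.mean() + Kyy.mean() - 2 * Kxy.mean()

def permutation_test(X, Y, n_perm=200, bandwidth=1.0, seed=0):
    # Permutation two-sample test: reshuffling the pooled samples
    # (discarding the sample indexes) simulates the null distribution,
    # which is what guarantees Type-I error control.
    rng = np.random.default_rng(seed)
    observed = mmd2(X, Y, bandwidth)
    pooled = np.vstack([X, Y])
    n = len(X)
    count = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        count += mmd2(pooled[idx[:n]], pooled[idx[n:]], bandwidth) >= observed
    return (count + 1) / (n_perm + 1)  # p-value

# Usage: samples from two clearly different Gaussians should yield a small p-value.
rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(100, 2))
Y = rng.normal(1.0, 1.0, size=(100, 2))
p = permutation_test(X, Y, n_perm=100)
```

The key point mirrored from the paper: the permutation step only needs the pooled data, not which sample each point came from, so any representation learned without sample indexes can be plugged into the kernel without inflating the Type-I error.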

@article{tian2025_2412.00613,
  title={A Unified Data Representation Learning for Non-parametric Two-sample Testing},
  author={Xunye Tian and Liuhua Peng and Zhijian Zhou and Mingming Gong and Arthur Gretton and Feng Liu},
  journal={arXiv preprint arXiv:2412.00613},
  year={2025}
}