ResearchTrend.AI
DeepSelective: Feature Gating and Representation Matching for Interpretable Clinical Prediction

15 April 2025
Ruochi Zhang
Qian Yang
Xiaoyang Wang
Haoran Wu
Qiong Zhou
Yu Wang
Kewei Li
Yueying Wang
Yusi Fan
Jiale Zhang
Lan Huang
Chang Liu
Fengfeng Zhou
Abstract

The rapid accumulation of Electronic Health Records (EHRs) has transformed healthcare by providing valuable data that enhance clinical predictions and diagnoses. While conventional machine learning models have proven effective, they often lack robust representation learning and depend heavily on expert-crafted features. Deep learning offers powerful alternatives but is often criticized for its lack of interpretability. To address these challenges, we propose DeepSelective, a novel end-to-end deep learning framework for predicting patient prognosis from EHR data, with a strong emphasis on model interpretability. DeepSelective combines data compression techniques with an innovative feature selection approach, integrating custom-designed modules that work together to improve both accuracy and interpretability. Our experiments demonstrate that DeepSelective not only enhances predictive accuracy but also significantly improves interpretability, making it a valuable tool for clinical decision-making. The source code is freely available at this http URL.
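The abstract does not spell out the gating module itself, but feature gating of the kind named in the title is commonly implemented as a learnable elementwise mask: a sigmoid squashes per-feature logits into soft 0-to-1 gates, and features with near-zero gates are effectively deselected, which is what makes the selection inspectable. The sketch below illustrates that generic idea only; the function and variable names, and the toy logits, are assumptions, not the paper's implementation.

```python
import numpy as np

def sigmoid(z):
    """Numerically simple logistic function mapping logits to (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def feature_gate(x, gate_logits):
    """Apply a soft elementwise gate to input features.

    x           : (n_samples, n_features) input matrix, e.g. EHR features
    gate_logits : (n_features,) learnable parameters; strongly negative
                  logits drive their gates toward 0, suppressing those
                  features, while strongly positive logits keep them.
    Returns the gated features and the gate values themselves.
    """
    gates = sigmoid(gate_logits)
    return x * gates, gates

# Toy data: 4 patients, 5 features (purely illustrative).
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 5))

# Hypothetical logits after training: features 1 and 3 are suppressed.
gate_logits = np.array([4.0, -4.0, 2.0, -6.0, 0.0])
gated_x, gates = feature_gate(x, gate_logits)

# Interpretability hook: rank features by how open their gates are.
ranking = np.argsort(-gates)
```

In a full model the gate logits would be trained jointly with the downstream predictor (often with a sparsity penalty on the gates), and inspecting `ranking` after training indicates which inputs the model actually relied on.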

@article{zhang2025_2504.11264,
  title={DeepSelective: Feature Gating and Representation Matching for Interpretable Clinical Prediction},
  author={Ruochi Zhang and Qian Yang and Xiaoyang Wang and Haoran Wu and Qiong Zhou and Yu Wang and Kewei Li and Yueying Wang and Yusi Fan and Jiale Zhang and Lan Huang and Chang Liu and Fengfeng Zhou},
  journal={arXiv preprint arXiv:2504.11264},
  year={2025}
}