Gaze-Assisted Human-Centric Domain Adaptation for Cardiac Ultrasound Image Segmentation

6 February 2025
Ruiyi Li
Yuting He
Rongjun Ge
Chong Wang
Daoqiang Zhang
Yang Chen
Shuo Li
Abstract

Domain adaptation (DA) for cardiac ultrasound image segmentation is clinically significant and valuable. However, previous domain adaptation methods are prone to being affected by incomplete pseudo-labels and low-quality target-to-source images. Human-centric domain adaptation has the great advantage of using human cognitive guidance to help the model adapt to the target domain and to reduce reliance on labels. Doctors' gaze trajectories contain a large amount of cross-domain human guidance. To leverage gaze information and human cognition for guiding domain adaptation, we propose gaze-assisted human-centric domain adaptation (GAHCDA), which reliably guides the domain adaptation of cardiac ultrasound images. GAHCDA includes the following modules: (1) Gaze Augment Alignment (GAA), which enables the model to obtain generalizable, human-cognition-guided features so that it recognizes the segmentation target across different domains of cardiac ultrasound images as humans do; and (2) Gaze Balance Loss (GBL), which fuses the gaze heatmap with the model outputs, making the segmentation result structurally closer to the target domain. The experimental results illustrate that our proposed framework segments cardiac ultrasound images in the target domain more effectively than GAN-based methods and other self-training-based methods, showing great potential for clinical application.
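
The abstract only sketches the Gaze Balance Loss at a high level; the paper's exact formulation is not given here. As a rough, hypothetical illustration of the general idea, the minimal PyTorch sketch below weights a standard segmentation objective by a doctor-gaze heatmap so that clinician-attended regions dominate the loss. All names (gaze_balance_loss, gaze_heatmap, the 0.1 weight floor, etc.) are assumptions made for this sketch, not the authors' implementation.

import torch
import torch.nn.functional as F

def gaze_balance_loss(logits, pseudo_labels, gaze_heatmap, eps=1e-6):
    """Hypothetical gaze-weighted segmentation loss (illustrative only).

    logits:        (B, 1, H, W) raw model outputs on target-domain images
    pseudo_labels: (B, 1, H, W) binary pseudo-labels (possibly incomplete)
    gaze_heatmap:  (B, 1, H, W) doctor-gaze heatmap with values in [0, 1]
    """
    probs = torch.sigmoid(logits)

    # Per-pixel binary cross-entropy against the pseudo-labels.
    bce = F.binary_cross_entropy(probs, pseudo_labels, reduction="none")

    # Weight each pixel by the gaze heatmap so regions the clinician looked at
    # contribute more; a small floor keeps un-gazed pixels from being ignored.
    weights = 0.1 + 0.9 * gaze_heatmap
    weighted_bce = (weights * bce).sum() / (weights.sum() + eps)

    # Soft Dice term on the gaze-modulated prediction, encouraging the
    # segmented structure to agree with what the gaze highlights.
    fused = probs * weights
    inter = (fused * pseudo_labels).sum(dim=(1, 2, 3))
    union = fused.sum(dim=(1, 2, 3)) + pseudo_labels.sum(dim=(1, 2, 3))
    dice = 1.0 - (2.0 * inter + eps) / (union + eps)

    return weighted_bce + dice.mean()

if __name__ == "__main__":
    # Toy tensors only; real inputs would be echocardiography frames.
    logits = torch.randn(2, 1, 128, 128)
    labels = (torch.rand(2, 1, 128, 128) > 0.5).float()
    gaze = torch.rand(2, 1, 128, 128)
    print(gaze_balance_loss(logits, labels, gaze).item())

In this sketch the gaze heatmap plays two roles: it reweights the pixel-wise term and modulates the prediction inside the Dice term, which is one plausible reading of "fusing the gaze heatmap with the outputs"; the paper should be consulted for the actual GBL definition.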

@article{li2025_2502.03781,
  title={Gaze-Assisted Human-Centric Domain Adaptation for Cardiac Ultrasound Image Segmentation},
  author={Ruiyi Li and Yuting He and Rongjun Ge and Chong Wang and Daoqiang Zhang and Yang Chen and Shuo Li},
  journal={arXiv preprint arXiv:2502.03781},
  year={2025}
}