GazeHTA: End-to-end Gaze Target Detection with Head-Target Association

Abstract

Precisely detecting which object a person is paying attention to is critical for human-robot interaction, since it provides important cues about the human user's next action. We propose an end-to-end approach for gaze target detection: predicting a head-target connection between individuals and the target image regions they are looking at. Most existing methods rely on independent components such as off-the-shelf head detectors, or struggle to establish associations between heads and gaze targets. In contrast, we investigate an end-to-end multi-person gaze target detection framework with Heads and Targets Association (GazeHTA), which predicts multiple head-target instances based solely on an input scene image. GazeHTA addresses challenges in gaze target detection by (1) leveraging a pre-trained diffusion model to extract scene features for rich semantic understanding, (2) re-injecting head features to strengthen the head priors for improved head understanding, and (3) learning a connection map as an explicit visual association between heads and gaze targets. Our extensive experimental results demonstrate that GazeHTA outperforms state-of-the-art gaze target detection methods and two adapted diffusion-based baselines on two standard datasets.
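The connection map described in point (3) can be illustrated with a small sketch. The code below is not the paper's implementation; it is a hypothetical NumPy rendering of the idea: supervise a heatmap that traces the segment between a head center and its gaze target, then associate head-target pairs by how strongly the map responds along each candidate segment. All function names and parameters (e.g. `sigma`, the number of sample points) are illustrative assumptions.

```python
import numpy as np

def gaussian_heatmap(shape, center, sigma=2.0):
    """Render a 2D Gaussian bump at (x, y) = center on a (H, W) grid."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((xs - center[0]) ** 2 + (ys - center[1]) ** 2)
                  / (2.0 * sigma ** 2))

def connection_map(shape, head_center, target_center, sigma=2.0, n=50):
    """Gaussian 'tube' along the head->target segment (a supervision target)."""
    points = np.linspace(head_center, target_center, n)  # (n, 2) array of (x, y)
    m = np.zeros(shape)
    for p in points:
        m = np.maximum(m, gaussian_heatmap(shape, p, sigma))
    return m

def pair_score(conn, head_center, target_center, n=20):
    """Score a candidate head-target pair by the mean connection-map
    response sampled along the segment joining them."""
    points = np.linspace(head_center, target_center, n)
    return float(np.mean([conn[int(round(p[1])), int(round(p[0]))]
                          for p in points]))
```

In this toy setup, a predicted connection map responds strongly along true head-target segments, so scoring candidate pairs along their segments recovers the correct association between multiple heads and targets.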

@article{lin2025_2404.10718,
  title={GazeHTA: End-to-end Gaze Target Detection with Head-Target Association},
  author={Zhi-Yi Lin and Jouh Yeong Chew and Jan van Gemert and Xucong Zhang},
  journal={arXiv preprint arXiv:2404.10718},
  year={2025}
}