
Robust Understanding of Human-Robot Social Interactions through Multimodal Distillation

Abstract

The need for social robots and agents to interact with and assist humans is growing steadily. To interact successfully with humans, they must understand and analyse socially interactive scenes from the robot's own perspective. Works that model social situations between humans and agents are few, and those that exist are often too computationally intensive for real-time deployment or for real-world scenarios where available information is limited. We propose a robust knowledge distillation framework that models social interactions through various multimodal cues, yet remains robust against incomplete and noisy information during inference. Our teacher model is trained with multimodal input (body, face and hand gestures, gaze, raw images) and transfers knowledge to a student model that relies solely on body pose. Extensive experiments on two publicly available human-robot interaction datasets demonstrate that our student model achieves an average accuracy gain of 14.75\% over relevant baselines on multiple downstream social understanding tasks, even with up to 51\% of its input corrupted. The student model is highly efficient: it has $<1\%$ of the teacher model's parameters and uses $\sim 0.5$\textperthousand~of the teacher model's FLOPs. Our code will be made public upon publication.
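To make the teacher-student setup concrete, below is a minimal sketch of a feature-based knowledge distillation recipe of the kind the abstract describes: a large multimodal teacher and a lightweight pose-only student trained with a task loss, a soft-label distillation term, and a feature-matching term. All class names (`MultimodalTeacher`, `PoseStudent`), layer sizes, and loss weights are hypothetical illustrations; the paper's actual architecture and objective are not specified in the abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultimodalTeacher(nn.Module):
    """Hypothetical teacher: fuses body, face, hand, gaze, and image features."""
    def __init__(self, in_dims, hidden=512, num_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(sum(in_dims), hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, modality_feats):
        x = torch.cat(modality_feats, dim=-1)  # simple concatenation fusion
        z = self.encoder(x)
        return self.head(z), z

class PoseStudent(nn.Module):
    """Hypothetical student: body pose only, far fewer parameters."""
    def __init__(self, pose_dim, hidden=64, teacher_dim=512, num_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(pose_dim, hidden), nn.ReLU())
        self.proj = nn.Linear(hidden, teacher_dim)  # match teacher feature size
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, pose):
        z = self.encoder(pose)
        return self.head(z), self.proj(z)

def distillation_loss(s_logits, s_feat, t_logits, t_feat, labels,
                      T=4.0, alpha=0.5, beta=0.1):
    """Task loss + soft-label KD + feature matching (one common recipe)."""
    ce = F.cross_entropy(s_logits, labels)
    kd = F.kl_div(F.log_softmax(s_logits / T, dim=-1),
                  F.softmax(t_logits / T, dim=-1),
                  reduction="batchmean") * T * T
    feat = F.mse_loss(s_feat, t_feat.detach())  # teacher is frozen during distillation
    return ce + alpha * kd + beta * feat
```

In such a setup, robustness to corrupted input is typically encouraged by randomly masking or adding noise to the student's pose input during training while the teacher still sees the clean multimodal signal; whether the paper uses this exact strategy is not stated in the abstract.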

@article{bian2025_2505.06278,
  title={Robust Understanding of Human-Robot Social Interactions through Multimodal Distillation},
  author={Tongfei Bian and Mathieu Chollet and Tanaya Guha},
  journal={arXiv preprint arXiv:2505.06278},
  year={2025}
}