
Lend a Hand: Semi Training-Free Cued Speech Recognition via MLLM-Driven Hand Modeling for Barrier-free Communication

Abstract

Cued Speech (CS) is an innovative visual communication system that integrates lip-reading with hand coding, designed to enhance effective communication for individuals with hearing impairments. Automatic CS Recognition (ACSR) is the AI-driven process of automatically recognizing hand gestures and lip movements in CS and converting them into text. However, previous work often relies on complex fusion modules and training techniques. Additionally, due to the limited amount of CS data, hand feature extraction and recognition modeling have consistently been subpar, significantly limiting the effectiveness of ACSR. To address this issue, we explore the capabilities of Multimodal Large Language Models (MLLMs) in recognizing hand shapes and positions in CS. More precisely, we propose a new Semi Training-Free paradigm for ACSR, named STF-ACSR. This approach performs zero-shot recognition of hand movements through the Chinese CS Prompt Module (CCSPM), which is equipped with training-free keyframe filtering and customized prompt engineering based on an MLLM. It then integrates the recognition results into the lip-reading model using a Minimalist Fusion Module (MFM), effectively achieving superior recognition results. Furthermore, specifically for this study, we supplement the existing dataset of 6 normal-hearing CS cuers by recording additional data from 8 cuers with hearing impairments, resulting in a new mixed dataset. Extensive experiments demonstrate that STF-ACSR significantly outperforms previous methods on both normal-hearing and hearing-impaired data. Implementation and checkpoints are available at this https URL.
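The abstract outlines a three-stage pipeline: training-free keyframe filtering, zero-shot hand-cue recognition via a prompted MLLM (CCSPM), and a minimalist fusion with a lip-reading model (MFM). The sketch below illustrates that flow under stated assumptions; the function names, the motion-difference keyframe filter, the prompt wording, and the logit-bias fusion are hypothetical stand-ins, not the authors' implementation, and the MLLM call is stubbed so the example runs without a model backend.

```python
# Minimal illustrative sketch of an STF-ACSR-style pipeline (assumptions noted in comments).
import numpy as np


def filter_keyframes(frames: np.ndarray, threshold: float = 12.0) -> list[int]:
    """Training-free keyframe filtering (assumed variant): keep frames whose mean
    absolute difference from the last kept frame exceeds a motion threshold."""
    kept = [0]
    for i in range(1, len(frames)):
        diff = np.abs(frames[i].astype(float) - frames[kept[-1]].astype(float)).mean()
        if diff > threshold:
            kept.append(i)
    return kept


def query_mllm_hand_cues(keyframes: list[np.ndarray]) -> list[str]:
    """Placeholder for zero-shot hand shape/position recognition via an MLLM.
    A real system would send each keyframe with a customized CS prompt (the CCSPM
    step) to a multimodal model and parse its answer; here the call is stubbed."""
    prompt = (
        "You are given a Cued Speech keyframe. Identify the hand shape (1-8) "
        "and hand position (1-5), answering as 'shape,position'."
    )
    _ = prompt  # would accompany each image in the actual MLLM request
    return ["3,2" for _ in keyframes]  # dummy responses so the sketch runs


def minimalist_fusion(lip_logits: np.ndarray, hand_cues: list[str]) -> np.ndarray:
    """Illustrative fusion: bias lip-reading logits with a prior built from the
    MLLM's hand-cue predictions (the cue-to-class mapping here is hypothetical)."""
    prior = np.zeros_like(lip_logits)
    for cue in hand_cues:
        shape, position = (int(x) for x in cue.split(","))
        prior[:, (shape * 5 + position) % lip_logits.shape[1]] += 1.0
    return lip_logits + 0.1 * prior / max(len(hand_cues), 1)


if __name__ == "__main__":
    video = np.random.randint(0, 255, size=(30, 64, 64), dtype=np.uint8)  # dummy clip
    keep = filter_keyframes(video)
    cues = query_mllm_hand_cues([video[i] for i in keep])
    fused = minimalist_fusion(np.random.randn(30, 40), cues)  # 40-class dummy vocabulary
    print(f"kept {len(keep)} keyframes, fused logits shape {fused.shape}")
```

The design point this sketch tries to convey is the "semi training-free" split: only the lip-reading branch would require trained weights, while the hand branch relies entirely on keyframe selection plus MLLM prompting.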

@article{huang2025_2503.21785,
  title={Lend a Hand: Semi Training-Free Cued Speech Recognition via MLLM-Driven Hand Modeling for Barrier-free Communication},
  author={Guanjie Huang and Danny Hin Kwok Tsang and Li Liu},
  journal={arXiv preprint arXiv:2503.21785},
  year={2025}
}