Multi-modal Collaborative Optimization and Expansion Network for Event-assisted Single-eye Expression Recognition

In this paper, we propose a Multi-modal Collaborative Optimization and Expansion Network (MCO-E Net) that exploits the event modality to counter challenges such as low light, high exposure, and high dynamic range in single-eye expression recognition. The MCO-E Net introduces two novel designs: the Multi-modal Collaborative Optimization Mamba (MCO-Mamba) and the Heterogeneous Collaborative and Expansion Mixture-of-Experts (HCE-MoE). MCO-Mamba, built upon Mamba, leverages dual-modal information to jointly optimize the model, facilitating collaborative interaction and fusion of modal semantics; this encourages the model to balance the learning of both modalities and to harness their respective strengths. HCE-MoE, in turn, employs a dynamic routing mechanism to distribute structurally varied experts (deep, attention, and focal), fostering collaborative learning of complementary semantics. This heterogeneous architecture systematically integrates diverse feature-extraction paradigms to comprehensively capture expression semantics. Extensive experiments demonstrate that the proposed network achieves competitive performance on single-eye expression recognition, especially under poor lighting conditions.
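To make the routing idea concrete, below is a minimal, hypothetical PyTorch sketch of a mixture-of-experts layer in the spirit of HCE-MoE as summarized above: a learned router dynamically assigns each token to the top-k of several structurally different experts (deep, attention, and focal). All class names, dimensions, expert designs, and the top-k routing rule are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical heterogeneous MoE sketch (not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class DeepExpert(nn.Module):
    """Plain deep MLP expert (assumed design)."""
    def __init__(self, dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.GELU(),
            nn.Linear(hidden, hidden), nn.GELU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x):  # x: (B, N, D)
        return self.net(x)


class AttentionExpert(nn.Module):
    """Self-attention expert capturing global context (assumed design)."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):
        out, _ = self.attn(x, x, x)
        return self.norm(x + out)


class FocalExpert(nn.Module):
    """Local convolutional expert emphasizing nearby tokens (assumed design)."""
    def __init__(self, dim: int, kernel: int = 5):
        super().__init__()
        self.conv = nn.Conv1d(dim, dim, kernel, padding=kernel // 2, groups=dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):  # depthwise conv over the token axis
        y = self.conv(x.transpose(1, 2)).transpose(1, 2)
        return self.proj(F.gelu(y))


class HeterogeneousMoE(nn.Module):
    """Dynamically routes each token to the top-k structurally varied experts."""
    def __init__(self, dim: int, top_k: int = 2):
        super().__init__()
        self.experts = nn.ModuleList(
            [DeepExpert(dim), AttentionExpert(dim), FocalExpert(dim)]
        )
        self.router = nn.Linear(dim, len(self.experts))
        self.top_k = top_k

    def forward(self, x):  # x: (B, N, D)
        weights = F.softmax(self.router(x), dim=-1)          # (B, N, E) routing scores
        topw, topi = weights.topk(self.top_k, dim=-1)        # keep k best experts per token
        topw = topw / topw.sum(dim=-1, keepdim=True)         # renormalize kept weights
        expert_out = torch.stack([e(x) for e in self.experts], dim=-2)  # (B, N, E, D)
        gathered = torch.gather(
            expert_out, -2, topi.unsqueeze(-1).expand(-1, -1, -1, x.size(-1))
        )                                                     # (B, N, k, D)
        return (topw.unsqueeze(-1) * gathered).sum(dim=-2)   # weighted combination


# Example usage: route tokens of an (assumed) fused frame/event feature map.
moe = HeterogeneousMoE(dim=128)
fused_tokens = torch.randn(8, 49, 128)
out = moe(fused_tokens)  # (8, 49, 128)
```

For clarity the sketch evaluates every expert densely and then gathers the routed outputs; a practical implementation would dispatch tokens to experts sparsely and would typically add a load-balancing term on the router.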
@article{han2025_2505.12007,
  title={Multi-modal Collaborative Optimization and Expansion Network for Event-assisted Single-eye Expression Recognition},
  author={Runduo Han and Xiuping Liu and Shangxuan Yi and Yi Zhang and Hongchen Tan},
  journal={arXiv preprint arXiv:2505.12007},
  year={2025}
}