A Mamba-based Network for Semi-supervised Singing Melody Extraction Using Confidence Binary Regularization

Singing melody extraction (SME) is a key task in the field of music information retrieval. However, existing methods face several limitations. First, prior models use transformers to capture contextual dependencies, which require quadratic computation and result in low efficiency at the inference stage. Second, prior works typically rely on frequency-supervised methods to estimate the fundamental frequency (f0), ignoring the fact that musical performance is actually based on notes. Third, transformers typically require large amounts of labeled data to achieve optimal performance, but the SME task lacks sufficient annotated data. To address these issues, in this paper we propose a Mamba-based network, called SpectMamba, for semi-supervised singing melody extraction using confidence binary regularization. In particular, we begin by introducing Vision Mamba to achieve linear computational complexity. Then, we propose a novel note-f0 decoder that allows the model to better mimic musical performance. Further, to alleviate the scarcity of labeled data, we introduce a confidence binary regularization (CBR) module that leverages unlabeled data by maximizing the probability of the correct classes. The proposed method is evaluated on several public datasets, and the conducted experiments demonstrate its effectiveness.
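To make the CBR idea concrete, below is a minimal sketch of one plausible reading of confidence binary regularization on unlabeled frames, assuming a softmax over pitch classes: frames where the model is already confident keep their predicted class as a pseudo-target, and a binary cross-entropy term pushes that class probability toward 1. The function name, the 0.95 threshold, and the masking and averaging scheme are illustrative assumptions, not the paper's exact formulation.

import torch
import torch.nn.functional as F

def confidence_binary_regularization(logits, threshold=0.95):
    # logits: (batch, frames, classes) unnormalized scores on unlabeled audio.
    probs = logits.softmax(dim=-1)              # per-frame class probabilities
    conf = probs.max(dim=-1).values             # probability of the predicted class
    mask = (conf >= threshold).float()          # regularize only confident frames
    # Binary target of 1 for the predicted class, i.e. maximize its probability
    # (equivalent to -log(conf) on the selected frames).
    loss = F.binary_cross_entropy(conf, torch.ones_like(conf), reduction="none")
    return (loss * mask).sum() / mask.sum().clamp(min=1.0)

In a semi-supervised setup, such a term would typically be added to the supervised loss on labeled data with a weighting factor; that weighting is likewise an assumption here.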
@article{he2025_2505.08681,
  title   = {A Mamba-based Network for Semi-supervised Singing Melody Extraction Using Confidence Binary Regularization},
  author  = {Xiaoliang He and Kangjie Dong and Jingkai Cao and Shuai Yu and Wei Li and Yi Yu},
  journal = {arXiv preprint arXiv:2505.08681},
  year    = {2025}
}