Evaluating Self-supervised Speech Models on a Taiwanese Hokkien Corpus
Yi-Hui Chou
Kalvin Chang
Meng-Ju Wu
Winston Ou
Alice Wen-Hsin Bi
Carol Yang
Bryan Y. Chen
Rong-Wei Pai
Po-Yen Yeh
Jo-Peng Chiang
Iu-Tshian Phoann
Winnie Chang
Chenxuan Cui
Noel Chen
Jiatong Shi

Abstract
Taiwanese Hokkien is declining in use and status due to a language shift toward Mandarin in Taiwan. This is partly why it is a low-resource language in NLP and speech research today. To ensure that the state of the art in speech processing does not leave Taiwanese Hokkien behind, we contribute a 1.5-hour dataset of Taiwanese Hokkien to ML-SUPERB's hidden set. Evaluating ML-SUPERB's suite of self-supervised learning (SSL) speech representations on our dataset, we find that model size does not consistently determine performance. In fact, certain smaller models outperform larger ones. Furthermore, linguistic alignment between pretraining data and the target language plays a crucial role.