
Advancing Zero-shot Text-to-Speech Intelligibility across Diverse Domains via Preference Alignment

Abstract

Modern zero-shot text-to-speech (TTS) systems, despite extensive pre-training, often struggle in challenging scenarios such as tongue twisters, repeated words, code-switching, and cross-lingual synthesis, leading to intelligibility issues. To address these limitations, this paper leverages preference alignment techniques, which enable the targeted construction of out-of-pretraining-distribution data to enhance performance. We introduce a new dataset, the Intelligibility Preference Speech Dataset (INTP), and extend the Direct Preference Optimization (DPO) framework to accommodate diverse TTS architectures. After INTP alignment, beyond intelligibility, we observe overall improvements in naturalness, similarity, and audio quality for multiple TTS models across diverse domains. Building on this, we also verify the weak-to-strong generalization ability of INTP for more intelligible models such as CosyVoice 2 and Ints. Moreover, we showcase the potential for further improvements through iterative alignment based on Ints. Audio samples are available at this https URL.
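For orientation, the alignment described above builds on DPO. The following is a minimal, illustrative PyTorch sketch of the standard DPO objective over paired (preferred, rejected) synthesized utterances, assuming sequence-level log-probabilities from the policy and a frozen reference TTS model; the function name, the beta value, and the toy inputs are assumptions for illustration, and the paper's actual extension to diverse TTS architectures may differ.

    import torch
    import torch.nn.functional as F

    def dpo_loss(policy_chosen_logps, policy_rejected_logps,
                 ref_chosen_logps, ref_rejected_logps, beta=0.1):
        """Standard DPO objective over paired (preferred, rejected) samples.

        Each argument is a 1-D tensor of sequence-level log-probabilities,
        e.g. summed token log-probs of the synthesized speech tokens under
        the policy model and the frozen reference model.
        """
        chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
        rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
        # Maximize the implicit reward margin between preferred and rejected samples.
        return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

    # Toy usage with random log-probabilities for a batch of 4 preference pairs.
    if __name__ == "__main__":
        b = 4
        loss = dpo_loss(torch.randn(b), torch.randn(b),
                        torch.randn(b), torch.randn(b))
        print(loss.item())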

@article{zhang2025_2505.04113,
  title={Advancing Zero-shot Text-to-Speech Intelligibility across Diverse Domains via Preference Alignment},
  author={Xueyao Zhang and Yuancheng Wang and Chaoren Wang and Ziniu Li and Zhuo Chen and Zhizheng Wu},
  journal={arXiv preprint arXiv:2505.04113},
  year={2025}
}