Zero-Shot Voice Conversion (VC) aims to transform the source speaker's timbre into an arbitrary unseen one while retaining speech content. Most prior work focuses on preserving the source's prosody, yet fine-grained timbre information may leak through prosody, and transferring the target's prosody to the synthesized speech is rarely studied. In light of this, we propose R-VC, a rhythm-controllable and efficient zero-shot voice conversion model. R-VC employs data perturbation techniques and discretizes source speech into HuBERT content tokens, eliminating much content-irrelevant information. By leveraging a Mask Generative Transformer for in-context duration modeling, our model adapts the linguistic content duration to the desired target speaking style, facilitating the transfer of the target speaker's rhythm. Furthermore, R-VC introduces a powerful Diffusion Transformer (DiT) trained with shortcut flow matching, conditioning the network not only on the current noise level but also on the desired step size; this enables high timbre similarity and high-quality speech generation in fewer sampling steps, even in just two, thereby minimizing latency. Experimental results show that R-VC achieves speaker similarity comparable to state-of-the-art VC methods with a smaller dataset, and surpasses them in speech naturalness, intelligibility, and style transfer performance.
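As a rough illustration of the shortcut flow matching objective described above, the sketch below shows how a velocity network can be conditioned on both the noise level and the desired step size, and trained with a flow matching loss plus a shortcut self-consistency loss. It is a minimal sketch, not the authors' released code: the network (`VelocityNet`), the feature dimension, and the loss weighting are placeholder assumptions standing in for R-VC's DiT, following the generic shortcut-model formulation.

```python
import torch
import torch.nn as nn

class VelocityNet(nn.Module):
    """Toy stand-in for the DiT: predicts velocity from (x_t, noise level t, step size d)."""
    def __init__(self, dim: int = 80):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 2, 256), nn.SiLU(), nn.Linear(256, dim))

    def forward(self, x_t, t, d):
        # t and d are appended as per-example scalar conditions.
        cond = torch.cat([x_t, t[:, None], d[:, None]], dim=-1)
        return self.net(cond)

def shortcut_fm_loss(model, x0, x1):
    """Flow matching loss at d = 0 plus the shortcut self-consistency loss."""
    b = x0.size(0)
    t = torch.rand(b)                    # noise level in [0, 1]
    d = torch.rand(b) * (1.0 - t) * 0.5  # step size chosen so that t + 2d <= 1
    x_t = (1 - t)[:, None] * x0 + t[:, None] * x1

    # (1) Standard flow matching: at step size 0 the target is the straight velocity x1 - x0.
    v_pred = model(x_t, t, torch.zeros_like(t))
    loss_fm = ((v_pred - (x1 - x0)) ** 2).mean()

    # (2) Shortcut consistency: one step of size 2d should match two chained steps of size d.
    with torch.no_grad():
        v1 = model(x_t, t, d)
        x_mid = x_t + d[:, None] * v1
        v2 = model(x_mid, t + d, d)
        target = 0.5 * (v1 + v2)
    v_big = model(x_t, t, 2 * d)
    loss_sc = ((v_big - target) ** 2).mean()
    return loss_fm + loss_sc

# Usage: x0 ~ noise, x1 = target speech features (e.g. mel frames) for one batch.
model = VelocityNet(dim=80)
x0, x1 = torch.randn(16, 80), torch.randn(16, 80)
loss = shortcut_fm_loss(model, x0, x1)
loss.backward()
```

Because the network is told the step size it will be queried with, inference can take a few large Euler steps (e.g. two steps of size 0.5 from noise to the target features) instead of many small ones, which is what underlies the low-latency claim above.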
@article{zuo2025_2506.01014,
  title   = {Rhythm Controllable and Efficient Zero-Shot Voice Conversion via Shortcut Flow Matching},
  author  = {Jialong Zuo and Shengpeng Ji and Minghui Fang and Mingze Li and Ziyue Jiang and Xize Cheng and Xiaoda Yang and Chen Feiyang and Xinyu Duan and Zhou Zhao},
  journal = {arXiv preprint arXiv:2506.01014},
  year    = {2025}
}