
SF-Speech: Straightened Flow for Zero-Shot Voice Clone

Abstract

Recently, neural ordinary differential equation (ODE) models trained with flow matching have achieved impressive performance on the zero-shot voice cloning task. However, using standard Gaussian noise as the initial distribution of the ODE causes numerous intersections among the targets fitted by flow matching, which complicates model training and increases the curvature of the learned generation trajectories. These curved trajectories limit the ability of ODE models to generate high-quality samples in only a few steps. This paper proposes SF-Speech, a novel voice cloning model based on ODEs and in-context learning. Unlike previous works, SF-Speech adopts a lightweight multi-stage module to produce a more deterministic initial distribution for the ODE. Without introducing any additional loss function, jointly training the ODE model with the proposed module effectively straightens its curved reverse trajectories. Experimental results on datasets of various scales show that SF-Speech outperforms state-of-the-art zero-shot TTS methods while requiring only a quarter of the solver steps, yielding a generation speed approximately 3.7 times that of Voicebox and E2 TTS. Audio samples are available on the demo page\footnote{[Online] Available: this https URL}.
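
The core idea described above, replacing the standard Gaussian source of flow matching with a near-deterministic starting point produced by a lightweight module that is trained jointly with the ODE's vector field under the plain flow-matching loss, can be sketched as follows. This is a minimal PyTorch illustration, not the authors' implementation: CoarsePredictor, VectorField, the feature dimensions, and the residual noise scale sigma are assumptions made for the sketch.

```python
# Minimal sketch of flow-matching training with a learned, more deterministic
# initial distribution instead of pure Gaussian noise. Module names, feature
# sizes, and the coarse predictor are illustrative assumptions.
import torch
import torch.nn as nn

FEAT_DIM, COND_DIM = 80, 256   # e.g. mel bins / conditioning size (assumed)

class CoarsePredictor(nn.Module):
    """Lightweight module mapping conditioning features to a rough acoustic
    estimate that serves as the ODE's starting point x0."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(COND_DIM, 256), nn.ReLU(),
                                 nn.Linear(256, FEAT_DIM))
    def forward(self, cond):
        return self.net(cond)

class VectorField(nn.Module):
    """v_theta(x_t, t, cond): predicts the flow-matching velocity."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(FEAT_DIM + COND_DIM + 1, 512), nn.ReLU(),
                                 nn.Linear(512, FEAT_DIM))
    def forward(self, x_t, t, cond):
        return self.net(torch.cat([x_t, cond, t], dim=-1))

coarse, v_theta = CoarsePredictor(), VectorField()
opt = torch.optim.Adam(list(coarse.parameters()) + list(v_theta.parameters()), lr=1e-4)

def train_step(x1, cond, sigma=0.1):
    # x1: target features [B, T, FEAT_DIM]; cond: conditioning [B, T, COND_DIM]
    x0 = coarse(cond) + sigma * torch.randn_like(x1)   # near-deterministic start
    t = torch.rand(x1.shape[0], 1, 1).expand(-1, x1.shape[1], 1)
    x_t = (1 - t) * x0 + t * x1                        # linear interpolation path
    u = x1 - x0                                        # flow-matching target velocity
    loss = ((v_theta(x_t, t, cond) - u) ** 2).mean()   # single regression loss, no extra terms
    opt.zero_grad(); loss.backward(); opt.step()       # joint update of both modules
    return loss.item()
```

Because x0 already lies close to the target, the conditional paths x_t intersect less often and the learned trajectories are straighter, which is what allows the ODE solver at inference to use roughly a quarter of the steps reported for Gaussian-initialized baselines, per the abstract.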

@article{li2025_2410.12399,
  title={SF-Speech: Straightened Flow for Zero-Shot Voice Clone},
  author={Xuyuan Li and Zengqiang Shang and Hua Hua and Peiyang Shi and Chen Yang and Li Wang and Pengyuan Zhang},
  journal={arXiv preprint arXiv:2410.12399},
  year={2025}
}