End-to-End Zero-Shot Voice Conversion with Location-Variable
Convolutions
Zero-shot voice conversion is becoming an increasingly popular research direction, as it promises the ability to transform speech to match the vocal identity of any speaker. However, relatively little work has been done on end-to-end methods for this task, which are appealing because they remove the need for a separate vocoder to generate audio from intermediate features. In this work, we propose LVC-VC, an end-to-end zero-shot voice conversion model that uses location-variable convolutions (LVCs) to jointly model the conversion and speech synthesis processes with a small number of parameters. LVC-VC utilizes carefully designed input features that disentangle content and speaker style information, and its neural vocoder-like architecture learns to combine them, performing voice conversion while simultaneously synthesizing audio. Experiments show that our model achieves voice conversion performance competitive with or better than several baselines while maintaining intelligibility particularly well.
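To make the core mechanism concrete: a location-variable convolution predicts a different convolution kernel for each local segment of the input, conditioned on auxiliary features, rather than sharing one kernel across all positions. The toy sketch below (NumPy, not the authors' implementation; the linear `kernel_predictor` and all shapes are illustrative assumptions) shows a 1-D version where each interval of the signal is filtered with a kernel generated from that interval's conditioning vector:

```python
import numpy as np

def location_variable_conv(x, cond, kernel_predictor, interval=4, ksize=3):
    """Toy 1-D location-variable convolution (illustrative sketch only).

    x:    (T,) input signal
    cond: (T // interval, D) one conditioning vector per interval
    kernel_predictor: maps a conditioning vector (D,) -> a kernel (ksize,)

    A distinct kernel is predicted for, and applied within, each interval,
    so the filter varies with location, unlike a standard convolution.
    """
    pad = ksize // 2
    xp = np.pad(x, pad)                 # zero-pad so output length equals T
    y = np.zeros_like(x)
    for i in range(len(x) // interval):
        k = kernel_predictor(cond[i])   # kernel for this interval
        for t in range(i * interval, (i + 1) * interval):
            y[t] = xp[t:t + ksize] @ k  # local dot product = convolution tap
    return y

# Hypothetical linear kernel predictor, for illustration only.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))
predict_kernel = lambda c: W @ c

x = rng.normal(size=16)
cond = rng.normal(size=(4, 8))
y = location_variable_conv(x, cond, predict_kernel)
print(y.shape)  # (16,)
```

In LVC-VC, conditioning of this kind is what lets speaker style information steer the synthesis filters while the content features flow through the main signal path.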