Powerful Speaker Embedding Training Framework by Adversarially Disentangled Identity Representation

Abstract

Irrelevant information in speech can seriously interfere with the performance of speaker verification. In particular, the most popular datasets do not contain enough labels to overcome this challenge. To address this problem, we propose a novel speaker embedding training framework based on an explicitly disentangled identity representation. Our key insight is to disentangle speaker information at the feature level using adversarial learning. An adversarial supervision signal is introduced to disperse identity information, which assists in obtaining a superior identity-purified feature. Experiments show that the proposed framework significantly improves the speaker verification performance of the original models without adjusting their structure or hyper-parameters. This suggests that adversarially disentangled representations are highly useful for alleviating the lack of speaker labels.
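The abstract does not specify how the adversarial supervision signal is implemented. One common realization of adversarial feature disentanglement is a gradient-reversal layer (as in domain-adversarial training): the adversary's loss gradient is negated before reaching the encoder, so the encoder is pushed to remove the information the adversary exploits. The sketch below is a minimal numpy illustration of that general idea; the class name, `lam` scale, and toy linear encoder/adversary are all hypothetical and not taken from the paper.

```python
import numpy as np

class GradReverse:
    """Hypothetical gradient-reversal layer: identity in the forward pass,
    sign-flipped (and scaled) gradient in the backward pass."""
    def __init__(self, lam=1.0):
        self.lam = lam  # assumed adversarial weighting factor

    def forward(self, x):
        return x  # features pass through unchanged

    def backward(self, grad_output):
        # Negate the adversary's gradient so the upstream encoder is updated
        # *against* the adversary's objective, dispersing the targeted information.
        return -self.lam * grad_output


# Toy illustration: a linear "encoder" feeding a linear adversarial classifier.
rng = np.random.default_rng(0)
W_enc = rng.normal(size=(4, 3))   # encoder weights (hypothetical shapes)
w_adv = rng.normal(size=3)        # adversary weights
x = rng.normal(size=4)            # one input feature vector

grl = GradReverse(lam=1.0)
h = grl.forward(x @ W_enc)        # encoder output features
score = h @ w_adv                 # adversary's scalar prediction

# Gradient of the adversary's output w.r.t. the features is w_adv;
# through the reversal layer the encoder receives the negated gradient.
grad_h_enc = grl.backward(w_adv)
assert np.allclose(grad_h_enc, -w_adv)
```

In a full training loop, the adversary would be optimized normally on its own loss while the encoder, seeing only the reversed gradient, learns features from which that loss cannot be reduced.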
