Abstract: The goal of voice conversion is to transform source speech into a target voice, keeping the content unchanged. In this paper, we focus on self-supervised representation learning for voice conversion. Specifically, we compare discrete and soft speech units as input features. We find that discrete representations effectively remove speaker information but discard some linguistic content – leading to mispronunciations. As a solution, we propose soft speech units learned by predicting a distribution over the discrete units. By modeling uncertainty, soft units capture more content information, improving the intelligibility and naturalness of converted speech.
For modularity, each component of the system is housed in a separate repository: the content encoders (bshall/hubert), the acoustic models (bshall/acoustic-model), and the HiFi-GAN vocoder (bshall/hifigan). Please visit those repositories for more details.
Fig 1: Architecture of the voice conversion system. a) The discrete content encoder clusters audio features to produce a sequence of discrete speech units. b) The soft content encoder is trained to predict the discrete units. The acoustic model transforms the discrete/soft speech units into a target spectrogram. The vocoder converts the spectrogram into an audio waveform.
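To make the soft-unit idea concrete, here is a minimal sketch of how a soft content encoder head could be trained: backbone features are projected to low-dimensional soft units, a second projection produces logits over the K discrete (k-means) units, and the head is optimized with cross-entropy against the discrete labels. The class name, layer sizes, and number of units below are illustrative assumptions, not the exact implementation in bshall/hubert.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative sketch (hypothetical names and sizes):
# backbone features -> soft units -> distribution over K discrete units.
class SoftUnitHead(nn.Module):
    def __init__(self, feature_dim=768, unit_dim=256, num_discrete_units=100):
        super().__init__()
        self.to_soft = nn.Linear(feature_dim, unit_dim)            # soft speech units
        self.to_logits = nn.Linear(unit_dim, num_discrete_units)   # logits over discrete units

    def forward(self, features):
        soft_units = self.to_soft(features)     # (batch, frames, unit_dim)
        logits = self.to_logits(soft_units)     # (batch, frames, num_discrete_units)
        return soft_units, logits

head = SoftUnitHead()
features = torch.randn(8, 50, 768)                # backbone features (e.g. from HuBERT)
discrete_labels = torch.randint(0, 100, (8, 50))  # k-means cluster IDs for the same frames

soft_units, logits = head(features)
# Train the head to predict the discrete units; the soft units model the full distribution.
loss = F.cross_entropy(logits.transpose(1, 2), discrete_labels)
loss.backward()
```

At inference time, the soft units (rather than the hard cluster IDs) are passed to the acoustic model, which is what lets them retain content information that discretization would throw away.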
Example Usage
Programmatic Usage
```python
import torch, torchaudio

# Load the content encoder (either hubert_soft or hubert_discrete)
hubert = torch.hub.load("bshall/hubert:main", "hubert_soft", trust_repo=True).cuda()

# Load the acoustic model (either hubert_soft or hubert_discrete)
acoustic = torch.hub.load("bshall/acoustic-model:main", "hubert_soft", trust_repo=True).cuda()

# Load the vocoder (either hifigan_hubert_soft or hifigan_hubert_discrete)
hifigan = torch.hub.load("bshall/hifigan:main", "hifigan_hubert_soft", trust_repo=True).cuda()

# Load the source audio
source, sr = torchaudio.load("path/to/wav")
assert sr == 16000
source = source.unsqueeze(0).cuda()

# Convert to the target speaker
with torch.inference_mode():
    # Extract speech units
    units = hubert.units(source)
    # Generate target spectrogram
    mel = acoustic.generate(units).transpose(1, 2)
    # Generate audio waveform
    target = hifigan(mel)
```
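To listen to the result, the converted waveform can be written to disk, for example to a hypothetical converted.wav. This assumes the vocoder output target is a tensor of shape (batch, channels, samples):

```python
# Save the converted audio (assumes target has shape (batch, channels, samples))
torchaudio.save("converted.wav", target.squeeze(0).cpu(), 16000)
```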
Citation
If you found this work helpful, please consider citing our paper:
```bibtex
@inproceedings{
  soft-vc-2022,
  author={van Niekerk, Benjamin and Carbonneau, Marc-André and Zaïdi, Julian and Baas, Matthew and Seuté, Hugo and Kamper, Herman},
  booktitle={ICASSP},
  title={A Comparison of Discrete and Soft Speech Units for Improved Voice Conversion},
  year={2022}
}
```