EmoMix: Emotion Mixing via Diffusion Models for Emotional Speech Synthesis

Figure: The overall architecture of EmoMix

Abstract

Emotional Text-To-Speech (TTS) synthesis has made significant progress in recent years. However, existing methods primarily focus on synthesizing a limited number of emotion types and perform poorly at intensity control. To address these limitations, we propose EmoMix, which can generate emotional speech with a specified intensity or a mixture of emotions. Specifically, EmoMix is a controllable emotional TTS model based on a diffusion probabilistic model and a pre-trained speech emotion recognition (SER) model used to extract emotion embeddings. Mixed-emotion synthesis is achieved by combining the noises predicted by the diffusion model conditioned on different emotions within a single sampling process at run time. We further mix Neutral and a specific primary emotion in varying proportions to control intensity. Experimental results validate the effectiveness of EmoMix for mixed-emotion synthesis and intensity control.
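The core idea in the abstract, combining noise predictions conditioned on different emotions inside one reverse-diffusion sampling loop, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `eps_model` callable, the `mix_noise` helper, the toy linear beta schedule, and the simplified DDPM-style update are all assumptions made for the example.

```python
import numpy as np

def mix_noise(noise_preds, weights):
    """Weighted combination of per-emotion noise predictions (hypothetical helper)."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()  # normalize mixing proportions
    return sum(w * eps for w, eps in zip(weights, noise_preds))

def sample_mixed(eps_model, emotion_embs, weights, steps=50, shape=(80, 100), rng=None):
    """Simplified DDPM-style reverse sampling that mixes conditional noise predictions.

    eps_model(x, t, emb) is assumed to return a noise estimate conditioned on one
    emotion embedding; one forward pass per emotion is mixed at every step.
    """
    rng = np.random.default_rng(rng)
    x = rng.standard_normal(shape)           # start from pure Gaussian noise
    betas = np.linspace(1e-4, 0.02, steps)   # toy linear noise schedule
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    for t in reversed(range(steps)):
        # predict noise once per emotion condition, then mix the predictions
        preds = [eps_model(x, t, emb) for emb in emotion_embs]
        eps = mix_noise(preds, weights)
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        x = (x - coef * eps) / np.sqrt(alphas[t])
        if t > 0:  # add sampling noise except at the final step
            x += np.sqrt(betas[t]) * rng.standard_normal(shape)
    return x
```

Setting `emotion_embs` to a Neutral embedding plus one primary emotion and varying `weights` mirrors the intensity-control idea described above.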

Type: Publication
Publication: In 24th Annual Conference of the International Speech Communication Association
Haobin Tang
University of Science and Technology of China

Ning Cheng
Researcher