RSET: Remapping-based Sorting Method for Emotion Transfer Speech Synthesis

The architecture of the RSET model

Abstract

Although current Text-To-Speech (TTS) models can generate high-quality speech, developing TTS with controllable emotion intensity remains challenging. Most existing TTS models control emotion intensity by extracting intensity information from reference speech. Unfortunately, because they neither model intra-class emotion intensity nor adequately decouple information, the generated speech cannot achieve fine-grained intensity control and suffers from information leakage. In this paper, we propose an emotion transfer TTS model that defines a remapping-based sorting method to model intra-class relative intensity information, combined with Mutual Information (MI) to decouple speaker and emotion information, and synthesizes expressive speech with perceptible intensity differences. Experiments show that our model achieves fine-grained emotion control while preserving speaker information.
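As a rough intuition for what "remapping-based sorting" of intra-class intensity might look like, the sketch below sorts samples of one emotion class by a scalar intensity score and remaps their ranks onto [0, 1], yielding relative intensity labels without absolute annotations. This is an illustrative reading only, not the paper's actual formulation; the function name and normalization are assumptions.

```python
def relative_intensity(scores):
    """Illustrative sketch (not the paper's method): remap raw
    intra-class intensity scores to rank-based values in [0, 1]."""
    n = len(scores)
    # Indices of samples ordered from weakest to strongest intensity.
    order = sorted(range(n), key=lambda i: scores[i])
    labels = [0.0] * n
    for rank, idx in enumerate(order):
        # Remap rank onto [0, 1]; a single sample maps to 0.0.
        labels[idx] = rank / (n - 1) if n > 1 else 0.0
    return labels
```

For example, `relative_intensity([0.9, 0.1, 0.5])` returns `[1.0, 0.0, 0.5]`: each sample keeps its position but carries a normalized relative rank instead of its raw score.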

Type: Publication
In the 8th APWeb-WAIM International Joint Conference on Web and Big Data
Haoxiang Shi
University of Science and Technology of China

Ning Cheng
Researcher