FastGraphTTS: An Ultrafast Syntax-Aware Speech Synthesis Framework

Figure: The framework of FastGraphTTS.

Abstract

This paper integrates a graph-to-sequence model into an end-to-end text-to-speech framework to enable syntax-aware modelling of the input text. Specifically, the input text is parsed by a dependency parsing module to form a syntactic graph. The syntactic graph is then encoded by a graph encoder to extract syntactic hidden representations, which are concatenated with the phoneme embeddings and fed into the alignment and flow-based decoding modules to generate the raw audio waveform. The model is evaluated on two languages, English and Mandarin, using single-speaker, few-shot target-speaker, and multi-speaker datasets. Experimental results show improved prosodic consistency between the input text and the generated audio, higher scores in subjective prosody evaluation, and the ability to perform voice conversion. In addition, the efficiency of the model is greatly improved through a custom AI chip operator design, yielding a 5x acceleration.
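To make the conditioning step concrete, below is a minimal PyTorch sketch (not the authors' implementation) of how word-level syntactic states produced by a graph encoder over the dependency graph could be concatenated with phoneme embeddings before the alignment and flow-based decoder. All module names, dimensions, and the phoneme-to-word mapping are assumptions for illustration only.

```python
# Hedged sketch of the syntax-aware conditioning described in the abstract:
# a graph encoder yields per-word syntactic states from the dependency graph,
# which are concatenated with phoneme embeddings before alignment/decoding.
# Module names, dimensions, and the phoneme-to-word mapping are assumptions.
import torch
import torch.nn as nn


class SimpleGraphEncoder(nn.Module):
    """One round of message passing over the dependency adjacency matrix."""

    def __init__(self, in_dim: int, hidden_dim: int):
        super().__init__()
        self.proj = nn.Linear(in_dim, hidden_dim)
        self.update = nn.GRUCell(hidden_dim, hidden_dim)

    def forward(self, word_feats: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # word_feats: (num_words, in_dim); adj: (num_words, num_words), row-normalized
        h = torch.tanh(self.proj(word_feats))
        messages = adj @ h                # aggregate neighbour states
        return self.update(messages, h)   # (num_words, hidden_dim)


class SyntaxAwareFrontend(nn.Module):
    """Concatenate each phoneme embedding with the syntactic state of its word."""

    def __init__(self, num_phonemes: int, phn_dim: int, word_dim: int, graph_dim: int):
        super().__init__()
        self.phoneme_emb = nn.Embedding(num_phonemes, phn_dim)
        self.graph_encoder = SimpleGraphEncoder(word_dim, graph_dim)

    def forward(self, phonemes, word_feats, adj, phn_to_word):
        phn = self.phoneme_emb(phonemes)            # (num_phn, phn_dim)
        syn = self.graph_encoder(word_feats, adj)   # (num_words, graph_dim)
        syn_per_phn = syn[phn_to_word]              # broadcast word state to its phonemes
        # The concatenated features would feed the alignment and flow-based decoder.
        return torch.cat([phn, syn_per_phn], dim=-1)


if __name__ == "__main__":
    frontend = SyntaxAwareFrontend(num_phonemes=100, phn_dim=192, word_dim=64, graph_dim=64)
    phonemes = torch.randint(0, 100, (12,))           # toy phoneme ids for one sentence
    word_feats = torch.randn(4, 64)                   # toy word-level features
    adj = torch.softmax(torch.randn(4, 4), dim=-1)    # toy row-normalized dependency adjacency
    phn_to_word = torch.tensor([0, 0, 0, 1, 1, 2, 2, 2, 2, 3, 3, 3])
    out = frontend(phonemes, word_feats, adj, phn_to_word)
    print(out.shape)  # torch.Size([12, 256])
```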

Publication
In 2023 IEEE 35th International Conference on Tools with Artificial Intelligence
Aolan Sun
Engineer