Multi-Quartznet: Multi-Resolution Convolution for Speech Recognition with Multi-Layer Feature Fusion

Figure: Multi-QuartzNet Model Architecture

Abstract

In this paper, we propose an end-to-end speech recognition network based on Nvidia's earlier QuartzNet [1] model. To improve its performance, we design three components: (1) a Multi-Resolution Convolution Module, which replaces the original 1D time-channel separable convolution with multi-stream convolutions, where each stream uses a unique dilation rate in its convolutional operations; (2) a Channel-Wise Attention Module, which computes an attention weight for each convolutional stream via spatial channel-wise pooling; and (3) a Multi-Layer Feature Fusion Module, which reweights each convolutional block using global multi-layer feature maps. Our experiments demonstrate that the Multi-QuartzNet model achieves a CER of 6.77% on the AISHELL-1 dataset, outperforming the original QuartzNet and approaching the state-of-the-art result.
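To illustrate the idea behind the first two components, here is a minimal PyTorch sketch of a multi-stream separable convolution block whose streams differ only in dilation rate, reweighted by a channel-wise attention computed from time-pooled features. The kernel size, dilation rates, and module names (`SeparableConv1d`, `MultiResolutionBlock`) are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (assumed hyperparameters, not the authors' code):
# parallel 1D time-channel separable convolutions with different
# dilation rates, combined by channel-wise attention weights.
import torch
import torch.nn as nn

class SeparableConv1d(nn.Module):
    """1D time-channel separable convolution (depthwise + pointwise)."""
    def __init__(self, channels, kernel_size, dilation):
        super().__init__()
        padding = (kernel_size - 1) // 2 * dilation  # preserve time length
        self.depthwise = nn.Conv1d(channels, channels, kernel_size,
                                   padding=padding, dilation=dilation,
                                   groups=channels)
        self.pointwise = nn.Conv1d(channels, channels, kernel_size=1)

    def forward(self, x):                  # x: (batch, channels, time)
        return self.pointwise(self.depthwise(x))

class MultiResolutionBlock(nn.Module):
    """Multi-stream convolutions, one dilation per stream, fused by
    attention weights derived from global average pooling over time."""
    def __init__(self, channels, kernel_size=33, dilations=(1, 2, 3)):
        super().__init__()
        self.streams = nn.ModuleList(
            SeparableConv1d(channels, kernel_size, d) for d in dilations)
        self.attention = nn.Sequential(
            nn.Linear(channels, len(dilations)),
            nn.Softmax(dim=-1))

    def forward(self, x):                  # x: (batch, channels, time)
        outs = torch.stack([s(x) for s in self.streams], dim=1)  # (B, S, C, T)
        pooled = x.mean(dim=-1)            # pool over time: (B, C)
        weights = self.attention(pooled)   # one weight per stream: (B, S)
        weights = weights.unsqueeze(-1).unsqueeze(-1)             # (B, S, 1, 1)
        return (outs * weights).sum(dim=1)  # weighted sum of streams

if __name__ == "__main__":
    feats = torch.randn(4, 256, 200)       # (batch, channels, frames)
    block = MultiResolutionBlock(channels=256)
    print(block(feats).shape)              # torch.Size([4, 256, 200])
```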

Type: Publication
Published in: 2021 IEEE Spoken Language Technology Workshop (SLT)
Jian Luo
Researcher
Ning Cheng
Researcher