Medical Speech Symptoms Classification via Disentangled Representation

Figure: Inference stage.

Abstract

In existing work, intent is defined for spoken language understanding. In medical speech, both textual and acoustic features carry intent, which is important for symptomatic diagnosis. In this paper, we propose a medical speech classification model named DRSC that automatically learns to disentangle intent and content representations from textual-acoustic data for classification. Intent representations in the text domain and the Mel-spectrogram domain are extracted via intent encoders, and the reconstructed text and Mel-spectrogram features are then obtained through a two-way exchange. The intents from the two domains are combined into a joint representation, which is fed into a decision layer for classification. Experimental results show that our model achieves an average accuracy of 95% in detecting 25 different medical symptoms.
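The sketch below illustrates the disentangle-and-exchange idea described above: each domain (text and Mel-spectrogram) is split into intent and content representations, the intents are swapped across domains to reconstruct each feature, and the two intents are fused for classification. All module names, dimensions, and the fusion strategy are assumptions for illustration, not the authors' exact DRSC implementation.

```python
# Minimal PyTorch sketch of the disentangled-representation idea in the abstract.
# Encoder/decoder designs, feature sizes, and concatenation-based fusion are assumed.
import torch
import torch.nn as nn


class DRSCSketch(nn.Module):
    def __init__(self, text_dim=768, mel_dim=128, hidden_dim=256, num_classes=25):
        super().__init__()
        # Intent and content encoders for the text domain (assumed MLPs).
        self.text_intent_enc = nn.Sequential(nn.Linear(text_dim, hidden_dim), nn.ReLU())
        self.text_content_enc = nn.Sequential(nn.Linear(text_dim, hidden_dim), nn.ReLU())
        # Intent and content encoders for the Mel-spectrogram domain.
        self.mel_intent_enc = nn.Sequential(nn.Linear(mel_dim, hidden_dim), nn.ReLU())
        self.mel_content_enc = nn.Sequential(nn.Linear(mel_dim, hidden_dim), nn.ReLU())
        # Decoders that rebuild each domain from an (intent, content) pair,
        # used for the cross-domain exchange reconstruction.
        self.text_dec = nn.Linear(2 * hidden_dim, text_dim)
        self.mel_dec = nn.Linear(2 * hidden_dim, mel_dim)
        # Decision layer over the joint intent representation.
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, text_feat, mel_feat):
        # Disentangle intent and content in each domain.
        t_int, t_cnt = self.text_intent_enc(text_feat), self.text_content_enc(text_feat)
        m_int, m_cnt = self.mel_intent_enc(mel_feat), self.mel_content_enc(mel_feat)
        # Exchange intents across domains and reconstruct each feature.
        text_recon = self.text_dec(torch.cat([m_int, t_cnt], dim=-1))
        mel_recon = self.mel_dec(torch.cat([t_int, m_cnt], dim=-1))
        # Combine the two intent representations and classify the symptom.
        joint_intent = torch.cat([t_int, m_int], dim=-1)
        logits = self.classifier(joint_intent)
        return logits, text_recon, mel_recon


# Usage example with random features for a batch of 4 utterances.
model = DRSCSketch()
logits, _, _ = model(torch.randn(4, 768), torch.randn(4, 128))
print(logits.shape)  # torch.Size([4, 25])
```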

Type: Publication
In the 27th International Conference on Computer Supported Cooperative Work in Design (CSCWD)
Pengcheng Li
University of Science and Technology of China