- Title
JSUM: A Multitask Learning Speech Recognition Model for Jointly Supervised and Unsupervised Learning.
- Authors
Yolwas, Nurmemet; Meng, Weijing
- Abstract
In recent years, the end-to-end speech recognition model has emerged as a popular alternative to the traditional Deep Neural Network-Hidden Markov Model (DNN-HMM). This approach maps acoustic features directly onto text sequences via a single network architecture, significantly streamlining the model construction process. However, training end-to-end speech recognition models typically requires a large quantity of supervised data to achieve good performance, which poses a challenge in low-resource conditions. The use of unsupervised representations significantly reduces this requirement. Recent research has focused on end-to-end techniques employing joint Connectionist Temporal Classification (CTC) and attention mechanisms, with some also concentrating on unsupervised representation learning. This paper proposes a joint supervised and unsupervised multi-task learning model (JSUM). Our approach leverages the unsupervised pre-trained wav2vec 2.0 model as a shared encoder, integrating the joint CTC-Attention network and the generative adversarial network into a unified end-to-end architecture. Our method provides a new low-resource language speech recognition solution that optimally utilizes supervised and unsupervised datasets by combining CTC, attention, and generative adversarial losses. Furthermore, our proposed approach is suitable for both monolingual and cross-lingual scenarios.
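The abstract describes a training objective that combines CTC, attention, and generative adversarial losses. A minimal sketch of how such a multi-task objective might be combined, assuming a simple weighted sum (the weight names and values here are illustrative assumptions, not taken from the paper):

```python
def joint_loss(ctc_loss, attention_loss, adversarial_loss,
               lambda_ctc=0.3, lambda_att=0.7, lambda_adv=1.0):
    """Combine the three task losses into one training objective.

    Hypothetical weighting scheme: the supervised branch mixes the CTC
    and attention terms (as in joint CTC-Attention training), while the
    adversarial term comes from the unsupervised (GAN) branch.
    """
    supervised = lambda_ctc * ctc_loss + lambda_att * attention_loss
    return supervised + lambda_adv * adversarial_loss

# Example: a supervised batch contributes the CTC and attention terms;
# an unsupervised batch contributes the adversarial term.
total = joint_loss(ctc_loss=2.0, attention_loss=1.0, adversarial_loss=0.5)
```

In practice each term would be computed by its own network head over the shared wav2vec 2.0 encoder outputs; the sketch only shows how the scalar losses could be merged into a single objective for backpropagation.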
- Subjects
AUTOMATIC speech recognition; SPEECH perception; GENERATIVE adversarial networks; MARKOV processes
- Publication
Applied Sciences (2076-3417), 2023, Vol 13, Issue 9, p5239
- ISSN
2076-3417
- Publication type
Article
- DOI
10.3390/app13095239