- Title
Speaker Verification Based on Teacher-Free Knowledge Distillation Model.
- Authors
XIAO Jinzhuang; LI Ruipeng; JI Mengmeng
- Abstract
Text-independent speaker verification models achieve strong performance through complex network structures and varied feature extraction methods, but they require large memory consumption and growing computing costs, which makes them difficult to deploy on resource-limited hardware. To address this problem, this work exploits the teacher-free knowledge distillation (Tf-KD) model, whose virtual teacher provides one hundred percent classification accuracy and a smoothed output probability distribution, to build a teacher-free speaker verification (Tf-SV) model based on a lightweight residual network. In addition, the spatial-shared, channel-wise dynamic rectified linear unit activation function and the additive angular margin loss function (AAM-Softmax) are introduced, which greatly improve the proposed model's feature expressiveness, training efficiency, and capability after compression, so that the resulting Tf-SV model can be deployed on storage- or compute-limited devices. Experimental results on the VoxCeleb1 dataset show that the equal error rate (EER) of the Tf-SV model is reduced to 3.4%, a clear improvement over published results that demonstrates the effectiveness of the compressed model on the speaker verification task.
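For readers unfamiliar with the teacher-free setup described in the abstract, the sketch below illustrates one common way a Tf-KD-style training loss is assembled in PyTorch: a hand-crafted "virtual teacher" that is always correct but smoothed, combined with ordinary cross-entropy on the hard labels. The function name, hyperparameter values, and loss weighting here are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def tf_kd_loss(logits, labels, alpha=0.9, temperature=10.0, weight=0.5):
    """Illustrative teacher-free KD loss (values are assumptions, not the paper's settings)."""
    num_classes = logits.size(1)

    # Hand-crafted "virtual teacher": probability `alpha` on the true class,
    # the remainder spread uniformly over the other classes -- by construction
    # it is 100% accurate and has a smooth output distribution.
    teacher = torch.full_like(logits, (1.0 - alpha) / (num_classes - 1))
    teacher.scatter_(1, labels.unsqueeze(1), alpha)

    # Standard cross-entropy on the hard labels.
    ce = F.cross_entropy(logits, labels)

    # KL divergence between the temperature-softened student output and the
    # virtual teacher distribution, scaled by T^2 as in standard distillation.
    log_student = F.log_softmax(logits / temperature, dim=1)
    kd = F.kl_div(log_student, teacher, reduction="batchmean") * (temperature ** 2)

    return (1.0 - weight) * ce + weight * kd

# Usage sketch: `model` maps speaker features to per-speaker logits.
# logits = model(features)              # shape (batch, num_speakers)
# loss = tf_kd_loss(logits, speaker_ids)
```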
- Subjects
DISTRIBUTION (Probability theory); MACHINE learning; ADDITIVE functions; FEATURE extraction
- Publication
Journal of Computer Engineering & Applications, 2022, Vol 58, Issue 8, p198
- ISSN
1002-8331
- Publication type
Article
- DOI
10.3778/j.issn.1002-8331.2012-0298