- Title
Combine-Net: An Improved Filter Pruning Algorithm.
- Authors
Wang, Jinghan; Li, Guangyue; Zhang, Wenzhao
- Abstract
The powerful performance of deep learning is widely recognized. As research has deepened, neural networks have grown more complex and do not generalize easily to resource-constrained devices. The emergence of a series of model compression algorithms has made artificial intelligence on edge devices possible. Among them, structured pruning is widely used because of its versatility: it prunes the neural network itself, discarding relatively unimportant structures to reduce model size. However, previous pruning work suffers from problems such as network evaluation errors, empirically determined pruning rates, and low retraining efficiency. We therefore propose Combine-Net, an accurate, objective, and efficient pruning algorithm that introduces Adaptive BN to eliminate evaluation errors, the Kneedle algorithm to determine the pruning rate objectively, and knowledge distillation to improve retraining efficiency. Without loss of precision, Combine-Net achieves 95% parameter compression and 83% computation compression on VGG16 on CIFAR10, and 71% parameter compression and 41% computation compression on ResNet50 on CIFAR100. Experiments on different datasets and models demonstrate that Combine-Net efficiently compresses a neural network's parameters and computation.
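To illustrate the knee-based pruning-rate selection the abstract attributes to the Kneedle algorithm, here is a minimal sketch: it locates the knee of an accuracy-vs-pruning-rate curve as the point farthest from the chord joining the curve's endpoints, a simplified stand-in for full Kneedle. The sweep data (`rates`, `acc`) are hypothetical values, not results from the paper.

```python
import numpy as np

def find_knee(x, y):
    """Return the index of the curve's knee: the point with the largest
    perpendicular distance to the straight line joining the endpoints
    (a simplified variant of Kneedle's knee detection)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    # Normalize both axes to [0, 1] so distances are comparable.
    xn = (x - x.min()) / (x.max() - x.min())
    yn = (y - y.min()) / (y.max() - y.min())
    # Unit vector along the chord from the first to the last point.
    p0 = np.array([xn[0], yn[0]])
    p1 = np.array([xn[-1], yn[-1]])
    chord = (p1 - p0) / np.linalg.norm(p1 - p0)
    # Perpendicular distance of each point to the chord (2-D cross product).
    vecs = np.stack([xn, yn], axis=1) - p0
    dists = np.abs(vecs[:, 0] * chord[1] - vecs[:, 1] * chord[0])
    return int(np.argmax(dists))

# Hypothetical sweep: accuracy (e.g. after Adaptive BN recalibration)
# measured at each candidate pruning rate.
rates = np.linspace(0.0, 0.9, 10)
acc = np.array([0.93, 0.93, 0.929, 0.928, 0.927, 0.925, 0.91, 0.87, 0.80, 0.65])
knee = find_knee(rates, acc)
print(f"suggested pruning rate: {rates[knee]:.1f}")  # → suggested pruning rate: 0.6
```

The knee marks where accuracy starts dropping sharply, giving an objective pruning rate instead of an empirically chosen one.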
- Subjects
ARTIFICIAL intelligence; ALGORITHMS; DEEP learning; DISTILLATION; OCCUPATIONAL retraining; EDGE computing
- Publication
Information (2078-2489), 2021, Vol 12, Issue 7, p264
- ISSN
2078-2489
- Publication type
Article
- DOI
10.3390/info12070264