- Title
EffShuffNet: An Efficient Neural Architecture for Adopting a Multi-Model.
- Authors
Kim, Jong-In; Yu, Gwang-Hyun; Lee, Jin; Vu, Dang Thanh; Kim, Jung-Hyun; Park, Hyun-Sun; Kim, Jin-Young; Hong, Sung-Hoon
- Abstract
This work discusses the challenges of multi-label image classification and presents a novel Efficient Shuffle Net (EffShuffNet), a convolutional neural network (CNN) architecture designed to address them. Multi-label classification is difficult because the complexity of prediction grows with the number of labels and classes, and current multi-model approaches require separately optimized deep learning models, which increases computational cost. The EffShuff block divides the input feature map into two parts and processes them differently: one half undergoes a lightweight convolution and the other half undergoes average pooling. The EffShuff transition component shuffles the feature maps after the lightweight convolution, resulting in a 57.9% reduction in computational cost compared to ShuffleNetV2. Furthermore, we propose the EffShuff-Dense architecture, which incorporates dense connections to further emphasize low-level features. In experiments, EffShuffNet achieved 96.975% accuracy in age and gender classification, 5.83% higher than the state of the art, while EffShuffDenseNet performed even better with 97.63% accuracy. Additionally, the proposed models were found to deliver better classification performance with smaller model sizes in fine-grained image classification experiments.
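The split/process/shuffle pattern the abstract describes can be sketched minimally in NumPy. This is an illustrative assumption-laden reconstruction, not the paper's implementation: the function names are invented here, and the lightweight-convolution branch is replaced by a placeholder pooling op (the actual block uses a learned convolution).

```python
import numpy as np

def avg_pool2x2(x):
    # 2x2 average pooling with stride 2 on a (C, H, W) map; H and W assumed even.
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))

def channel_shuffle(x, groups):
    # ShuffleNet-style channel shuffle: interleave channels across groups.
    c, h, w = x.shape
    return x.reshape(groups, c // groups, h, w).transpose(1, 0, 2, 3).reshape(c, h, w)

def effshuff_block_sketch(x):
    # Split the (C, H, W) feature map into two halves along channels,
    # process each branch, concatenate, then shuffle channels.
    c = x.shape[0]
    left, right = x[: c // 2], x[c // 2:]
    conv_branch = avg_pool2x2(left)   # placeholder for the lightweight conv branch
    pool_branch = avg_pool2x2(right)  # average-pooling branch, per the abstract
    out = np.concatenate([conv_branch, pool_branch], axis=0)
    return channel_shuffle(out, groups=2)

x = np.arange(4 * 4 * 4, dtype=float).reshape(4, 4, 4)
out = effshuff_block_sketch(x)
print(out.shape)  # channel count preserved, spatial dims halved: (4, 2, 2)
```

The shuffle step is what lets information cross between the two branches in subsequent blocks; without it, the pooled and convolved halves would never mix.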
- Subjects
DEEP learning; CONVOLUTIONAL neural networks; IMAGE recognition (Computer vision)
- Publication
Applied Sciences (2076-3417), 2023, Vol. 13, Issue 6, p. 3505
- ISSN
2076-3417
- Publication type
Article
- DOI
10.3390/app13063505