- Title
FasterAI: A Lightweight Library for Neural Networks Compression.
- Authors
Hubens, Nathan; Mancas, Matei; Gosselin, Bernard; Preda, Marius; Zaharia, Titus
- Abstract
FasterAI is a PyTorch-based library that aims to facilitate the use of deep neural network compression techniques such as sparsification, pruning, knowledge distillation, and regularization. The library is built to enable quick implementation and experimentation. In particular, the compression techniques leverage the callback systems of libraries such as fastai and PyTorch Lightning to offer a user-friendly, high-level API. The main asset of FasterAI is that it is lightweight yet powerful and simple to use. Indeed, because it has been developed in a very granular way, users can create thousands of unique experiments from different combinations of parameters, with only a single line of additional code. This makes FasterAI suited for practical usage, as it provides the most common compression techniques out-of-the-box, but also for research, as implementing a new compression technique usually boils down to writing a single line of code. In this paper, we present an in-depth overview of the compression techniques available in FasterAI. As a proof of concept, and to better illustrate how the library is used, we report results obtained by applying each technique to a ResNet-18 architecture trained on CALTECH-101.
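To make the sparsification idea concrete, the sketch below shows magnitude-based unstructured pruning, the kind of criterion such a library can apply through a training callback. This is a minimal illustration of the technique in plain numpy, not FasterAI's actual API; the function name and signature are assumptions for illustration only.

```python
import numpy as np

def sparsify(weights, sparsity):
    """Zero out the smallest-magnitude entries of a weight tensor.

    Illustrative sketch of magnitude-based unstructured pruning
    (NOT FasterAI's API). `sparsity` is the target percentage of
    weights to remove, e.g. 50 for 50%.
    """
    # Magnitude threshold: the `sparsity`-th percentile of |w|.
    threshold = np.percentile(np.abs(weights).ravel(), sparsity)
    # Keep only weights whose magnitude exceeds the threshold.
    mask = np.abs(weights) > threshold
    return weights * mask, mask

# Example: pruning the two smallest of four weights (50% sparsity).
w = np.array([[1.0, -2.0], [3.0, -4.0]])
pruned, mask = sparsify(w, 50)
# pruned → [[0., 0.], [3., -4.]]
```

In a callback-based design, such a criterion would be re-applied at scheduled points during training, which is what lets the schedule, granularity, and selection criterion vary independently as the abstract describes.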
- Subjects
LIBRARY information networks; ARTIFICIAL neural networks; LIBRARY cooperation
- Publication
Electronics (2079-9292), 2022, Vol. 11, Issue 22, p. 3789
- ISSN
2079-9292
- Publication type
Article
- DOI
10.3390/electronics11223789