- Title
A Spiking LSTM Accelerator for Automatic Speech Recognition Application Based on FPGA.
- Authors
Yin, Tingting; Dong, Feihong; Chen, Chao; Ouyang, Chenghao; Wang, Zheng; Yang, Yongkui
- Abstract
Long Short-Term Memory (LSTM) finds extensive application in sequential learning tasks, notably in speech recognition. However, existing accelerators tailored for traditional LSTM networks grapple with high power consumption, primarily due to the intensive matrix–vector multiplication operations inherent to LSTM networks. In contrast, the spiking LSTM network has been designed to avoid these multiplication operations by replacing multiplication and nonlinear functions with addition and comparison. In this paper, we present an FPGA-based accelerator specifically designed for spiking LSTM networks. Firstly, we employ a low-cost circuit in the LSTM gate to significantly reduce power consumption and hardware cost. Secondly, we propose a serial–parallel processing architecture along with hardware implementation to reduce inference latency. Thirdly, we quantize and efficiently deploy the synapses of the spiking LSTM network. The power consumption of the accelerator implemented on Artix-7 and Zynq-7000 is only about 1.1 W and 0.84 W, respectively, when performing the inference for speech recognition with the Free Spoken Digit Dataset (FSDD). Additionally, the energy consumed per inference is remarkably efficient, with values of 87 µJ and 66 µJ, respectively. In comparison with dedicated accelerators designed for traditional LSTM networks, our spiking LSTM accelerator achieves a remarkable reduction in power consumption, amounting to orders of magnitude.
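The core idea the abstract describes — replacing multiply-accumulate and nonlinear activations with addition and threshold comparison — can be illustrated with a minimal integrate-and-fire neuron step. This is an illustrative sketch only, not the paper's actual gate circuit; the function name, soft-reset behavior, and parameters are assumptions for the example.

```python
def spiking_neuron_step(weights, spikes_in, membrane, threshold=1.0, leak=0.0):
    """One time step of a simple integrate-and-fire neuron (illustrative).

    weights    -- synaptic weights, one per input
    spikes_in  -- binary input spikes (0 or 1)
    membrane   -- membrane potential carried over from the previous step
    """
    # Because inputs are binary spikes, the weighted sum w.x reduces to
    # conditional addition: add w[i] only when spike i fired.
    for w, s in zip(weights, spikes_in):
        if s:                 # comparison instead of multiplication
            membrane += w     # addition instead of multiply-accumulate
    membrane -= leak
    # A threshold comparison stands in for the sigmoid/tanh nonlinearity.
    if membrane >= threshold:
        return 1, membrane - threshold  # fire, soft reset (assumed here)
    return 0, membrane
```

With weights [0.6, 0.5] and both inputs spiking, the potential reaches 1.1, crosses the threshold of 1.0, and the neuron fires; with only the first input spiking it stays sub-threshold. No multiplier is ever needed, which is the basis for the hardware-cost reduction the paper targets.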
- Subjects
AUTOMATIC speech recognition; ARTIFICIAL neural networks; SPEECH perception; SEQUENTIAL learning; NONLINEAR functions
- Publication
Electronics (2079-9292), 2024, Vol 13, Issue 5, p827
- ISSN
2079-9292
- Publication type
Article
- DOI
10.3390/electronics13050827