StarSPA: Stride-Aware Sparsity Compression for Efficient CNN Acceleration
By: Ngoc-Son Pham, Sangwon Shin, Lei Xu, Weidong Shi, Taeweon Suh
Format: Article
Published: IEEE, 2024-01-01
Description
The presence of sparsity in both the input features and the weights of convolutional neural networks offers a valuable opportunity to significantly reduce the number of computations required during inference. Moreover, compressing the input data reduces storage requirements and lowers data-transfer costs, ultimately improving overall power efficiency. However, compressing randomly sparse inputs complicates the input-matching process, often resulting in substantial hardware overhead and increased power consumption. These challenges arise from the irregular nature of sparse inputs and the variability of convolutional strides. In response, this research introduces an innovative data compression method, named Stride-Aware Sparsity Compression (StarSPA), designed to locate valid input values efficiently and expedite the multiplication process. To fully capitalize on the proposed compression method, a weight-stationary approach is employed for efficient convolution. Comprehensive simulations demonstrate that the proposed accelerator achieves speedups of 1.17×, 1.05×, 1.09×, 1.23×, and 1.12× over the recent accelerator SparTen for AlexNet, VGG16, GoogLeNet, ResNet34, and EfficientNetV2, respectively. Furthermore, an FPGA implementation of the core shows a 2.55× reduction in hardware size and a 5× improvement in energy efficiency compared to SparTen.
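As a rough illustration of the general idea described in the abstract (not the StarSPA format itself, which this record does not specify), the sketch below compresses a sparse 1-D input into generic (index, value) pairs and then performs a weight-stationary, stride-aware convolution that touches only nonzero operands. The function names and the pair encoding are illustrative assumptions.

```python
import numpy as np

def compress_sparse(x):
    """Compress a sparse 1-D input into (index, value) pairs,
    dropping zeros. A generic sparsity compression stand-in,
    not the actual StarSPA encoding."""
    idx = np.nonzero(x)[0]
    return list(zip(idx.tolist(), x[idx].tolist()))

def sparse_strided_conv1d(compressed, x_len, w, stride):
    """Weight-stationary 1-D convolution over a compressed input:
    each nonzero input value is matched only to the output positions
    its index can reach under the given stride, so multiplications
    by zero are skipped entirely."""
    k = len(w)
    out_len = (x_len - k) // stride + 1
    out = np.zeros(out_len)
    for i, v in compressed:  # iterate nonzero inputs only
        # output o uses input i when o*stride <= i <= o*stride + k - 1
        o_min = max(0, -(-(i - k + 1) // stride))  # ceiling division
        o_max = min(out_len - 1, i // stride)
        for o in range(o_min, o_max + 1):
            out[o] += v * w[i - o * stride]  # stride-aware matching
    return out

x = np.array([0, 3, 0, 0, 5, 0, 2, 0], dtype=float)
w = np.array([1, -1, 2], dtype=float)
print(sparse_strided_conv1d(compress_sparse(x), len(x), w, stride=2))
# -> [-3. 10.  9.], identical to the dense strided convolution
```

In this sketch the weights stay resident while compressed inputs stream past them; the hard part the abstract attributes to StarSPA is making this index-to-output matching cheap in hardware when sparsity is irregular and the stride varies between layers.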