A Relaxed Quantization Training Method for Hardware Limitations of Resistive Random Access Memory (ReRAM)-Based Computing-in-Memory
by: Wei-Chen Wei, Chuan-Jia Jhang, Yi-Ren Chen, Cheng-Xin Xue, Syuan-Hao Sie, Jye-Luen Lee, Hao-Wen Kuo, Chih-Cheng Lu, Meng-Fan Chang, Kea-Tiong Tang
Format: Article
Published: IEEE 2020-01-01
Description
Nonvolatile computing-in-memory (nvCIM) exhibits high potential for neuromorphic computing involving massive parallel computations and for achieving high energy efficiency. nvCIM is especially suitable for deep neural networks, which must perform large numbers of matrix-vector multiplications. However, a comprehensive quantization algorithm that overcomes the hardware limitations of resistive random access memory (ReRAM)-based nvCIM, such as the limited numbers of I/Os, word lines (WLs), and ADC outputs, has yet to be developed. In this article, we propose a quantization training method for compressing deep models. The method comprises three steps: input and weight quantization, ReRAM convolution (ReConv), and ADC quantization. ADC quantization addresses the error-sampling problem by using the Gumbel-softmax trick. Under a 4-bit ADC of nvCIM, accuracy decreases by only 0.05% and 1.31% on MNIST and CIFAR-10, respectively, compared with the accuracies obtained under an ideal ADC. The experimental results indicate that the proposed method effectively compensates for the hardware limitations of nvCIM macros.
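To illustrate the ADC-quantization idea the abstract mentions, the sketch below shows how a Gumbel-softmax relaxation can make a discrete ADC readout differentiable during training. This is not the authors' code: the function name `gumbel_softmax_quantize`, the uniform 16-level grid standing in for a 4-bit ADC, and the distance-based logits are illustrative assumptions, using only PyTorch's standard `torch.nn.functional.gumbel_softmax`.

```python
# A minimal sketch (not the paper's implementation) of Gumbel-softmax
# quantization for a hypothetical 4-bit ADC with 16 output levels.
import torch
import torch.nn.functional as F

def gumbel_softmax_quantize(x, levels, tau=1.0, hard=True):
    """Quantize x to discrete ADC levels via a Gumbel-softmax relaxation.

    x:      analog values (e.g., simulated bit-line MAC results)
    levels: 1-D tensor of ADC output levels (16 levels for a 4-bit ADC)
    tau:    temperature; lower values approach hard quantization
    """
    # Logits: negative distance to each level, so nearer levels get
    # larger logits (an illustrative choice, not the paper's).
    logits = -torch.abs(x.unsqueeze(-1) - levels)
    # Differentiable (straight-through when hard=True) selection over levels.
    weights = F.gumbel_softmax(logits, tau=tau, hard=hard, dim=-1)
    # Collapse the soft/hard one-hot weights into a quantized value.
    return (weights * levels).sum(dim=-1)

# Usage: quantize simulated analog values with a hypothetical level grid.
adc_levels = torch.linspace(0.0, 1.0, steps=16)
x = torch.rand(8, requires_grad=True)
xq = gumbel_softmax_quantize(x, adc_levels, tau=0.5)
xq.sum().backward()  # gradients flow back through the discrete choice
```

The design point is that hard quantization has zero gradient almost everywhere; injecting Gumbel noise and relaxing the argmax into a temperature-controlled softmax lets the network learn through the ADC step while still producing discrete outputs in the forward pass.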