16-Bit Fixed-Point Number Multiplication With CNT Transistor Dot-Product Engine
By: Sungho Kim, Yongwoo Lee, Hee-Dong Kim, Sung-Jin Choi
Format: Article
Published: IEEE, 2020-01-01
Description
Resistive crossbar arrays can carry out energy-efficient vector-matrix multiplication, a crucial operation in most machine learning applications. However, practical computing tasks that require high precision remain challenging to implement in such arrays because of intrinsic device variability. Herein, we experimentally demonstrate a precision-extension technique whereby high precision is attained through the combined operation of multiple devices, each of which stores a portion of the required bit width. Additionally, purpose-designed analog-to-digital converters are used to remove the unpredictable effects of noise sources. An 8 × 15 carbon nanotube transistor array can perform multiplication operations, in which the operands have up to 16 valid bits, without any error, making in-memory computing approaches attractive for high-throughput, energy-efficient machine learning accelerators.
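The precision-extension idea described above can be illustrated in software as bit slicing: each high-precision weight is split across several low-precision devices, the array computes a partial dot product per slice, and the slices are recombined digitally by shift-and-add. The sketch below is a minimal numerical illustration of that general scheme; the 4-bit slice width, array size, and `bit_sliced_dot` helper are illustrative assumptions, not the authors' exact hardware mapping or ADC design.

```python
import numpy as np

# Illustrative bit-sliced dot product (precision extension by combining devices).
# Assumptions: 16-bit unsigned integer weights split into 4-bit slices, each slice
# notionally stored on a separate device; partial results recombined by shift-and-add.

SLICE_BITS = 4
NUM_SLICES = 16 // SLICE_BITS  # four 4-bit slices per 16-bit weight


def slice_weights(w16):
    """Split 16-bit integer weights into NUM_SLICES low-precision slices."""
    mask = (1 << SLICE_BITS) - 1
    return [(w16 >> (SLICE_BITS * k)) & mask for k in range(NUM_SLICES)]


def bit_sliced_dot(x, w16):
    """Dot product computed slice by slice, then recombined digitally."""
    result = 0
    for k, w_slice in enumerate(slice_weights(w16)):
        partial = np.dot(x, w_slice)                 # per-slice MAC (analog in hardware)
        result += int(partial) << (SLICE_BITS * k)   # digital shift-and-add recombination
    return result


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.integers(0, 2**8, size=8, dtype=np.int64)    # 8-element input vector
    w = rng.integers(0, 2**16, size=8, dtype=np.int64)   # 16-bit weights
    assert bit_sliced_dot(x, w) == int(np.dot(x, w))     # matches full-precision result
    print("bit-sliced result:", bit_sliced_dot(x, w))
```

Because the slices are recombined with exact integer shifts, the result equals the full-precision dot product as long as each per-slice partial sum is digitized without error, which is the role the paper assigns to its analog-to-digital converters.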