LDR00000nam u2200205 4500
001000000435422
00520200228094826
008200131s2019 ||||||||||||||||| ||eng d
020 ▼a 9781085629652
035 ▼a (MiAaPQ)AAI13878611
040 ▼a MiAaPQ ▼c MiAaPQ ▼d 247004
0820 ▼a 621
1001 ▼a Lin, Jeng-Hau.
24510 ▼a Resource Efficient and Error Resilient Neural Networks.
260 ▼a [S.l.]: ▼b University of California, San Diego, ▼c 2019.
260 1 ▼a Ann Arbor: ▼b ProQuest Dissertations & Theses, ▼c 2019.
300 ▼a 143 p.
500 ▼a Source: Dissertations Abstracts International, Volume: 81-03, Section: B.
500 ▼a Advisor: Gupta, Rajesh K.
5021 ▼a Thesis (Ph.D.)--University of California, San Diego, 2019.
506 ▼a This item must not be sold to any third party vendors.
506 ▼a This item must not be added to any third party search indexes.
520 ▼a Guardbands on timing specifications and energy budgets protect a system against faults, but the same guardbands impede advances in throughput and energy efficiency. To combat over-designed guardbands in systems carrying out deep learning inference, we examine the algorithmic demands and find that resource deficiency and hardware variation are the primary reasons conservative guardbands are needed. In modern convolutional neural networks (CNNs), the number of arithmetic operations for a single inference can exceed tens of billions, which requires a sophisticated buffering mechanism to balance resource utilization against throughput. In this setting, over-designed guardbands seriously hinder system performance. On the other hand, timing errors can be incurred by hardware variations, including momentary voltage droops resulting from simultaneous switching noise, a gradually decreasing voltage level due to a draining battery, and reduced electron mobility caused by system power dissipated as heat. Timing errors propagating through a network may begin as a snowball but end in catastrophe: a significant degradation in accuracy. Knowing that the need for guardbands originates from resource deficiency and timing errors, this dissertation focuses on cross-layer solutions to the high algorithmic demands of deep learning methods and to the error vulnerability caused by hardware variations. We begin by reviewing methods and technologies proposed in the literature, including weight encoding, filter decomposition, network pruning, efficient structure design, and precision quantization. While implementing an FPGA accelerator for the extreme case of quantization, binarized neural networks (BNNs), we realized that further optimizations were possible. We therefore extend BNNs at the algorithmic layer with binarized separable filters and propose BCNNw/SF.
Although quantization and approximation benefit hardware efficiency to a certain extent, the achievable reduction or compression rate is still limited by the core of conventional deep learning methods: convolution. We therefore introduce the local binary pattern (LBP) to deep learning because of LBP's low complexity yet high effectiveness. We name the new algorithm LBPNet; its feature maps are created in a fashion similar to the traditional LBP, using comparisons. LBPNet can be trained with the forward-backward propagation algorithm to extract useful features for image classification. We have implemented and optimized LBPNet accelerators to verify their classification performance, processing throughput, and energy efficiency. We also demonstrate that LBPNet has the strongest error immunity among the evaluated MLP, CNN, and BCNN models: when the timing error rate exceeds 0.01, LBPNet's classification accuracy drops by only 10%, while all the other models lose the ability to classify.
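The comparison-based feature extraction that LBPNet builds on can be illustrated with a minimal sketch of the classic 3x3 local binary pattern. This is an illustrative reimplementation of traditional LBP only, not the dissertation's LBPNet code; the function and variable names are assumptions:

```python
import numpy as np

def lbp_feature_map(img):
    """Compute a classic 3x3 local binary pattern map.

    Each interior pixel is compared with its 8 neighbors; each
    comparison yields one bit, and the 8 bits form a code in 0..255.
    Multiplications are replaced entirely by comparisons, which is
    the low-complexity property that motivates LBPNet.
    """
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # Offsets of the 8 neighbors, clockwise from the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    center = img[1:h - 1, 1:w - 1]
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        # One comparison per neighbor contributes one bit of the code.
        out |= (neighbor >= center).astype(np.uint8) << bit
    return out
```

The entire feature map is produced by comparisons and bit-packing; no multiply-accumulate is needed, which is why such patterns map cheaply onto FPGA logic.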
590 ▼a School code: 0033.
650 4 ▼a Computer science.
650 4 ▼a Computer engineering.
690 ▼a 0984
690 ▼a 0464
71020 ▼a University of California, San Diego. ▼b Computer Science and Engineering.
7730 ▼t Dissertations Abstracts International ▼g 81-03B.
773 ▼t Dissertations Abstracts International
790 ▼a 0033
791 ▼a Ph.D.
792 ▼a 2019
793 ▼a English
85640 ▼u http://www.riss.kr/pdu/ddodLink.do?id=T15491106 ▼n KERIS ▼z The full text of this material is provided by KERIS (Korea Education and Research Information Service).
980 ▼a 202002 ▼f 2020
990 ▼a ***1816162
991 ▼a E-BOOK