Daegu Haany University Hyangsan Library

Resource Efficient and Error Resilient Neural Networks

Detailed Information
Material Type: Thesis (Dissertation)
Title/Author: Resource Efficient and Error Resilient Neural Networks.
Personal Author: Lin, Jeng-Hau.
Corporate Author: University of California, San Diego. Computer Science and Engineering.
Publication: [S.l.]: University of California, San Diego, 2019.
Publication: Ann Arbor: ProQuest Dissertations & Theses, 2019.
Physical Description: 143 p.
Source Record: Dissertations Abstracts International 81-03B.
ISBN: 9781085629652
Dissertation Note: Thesis (Ph.D.)--University of California, San Diego, 2019.
General Note: Source: Dissertations Abstracts International, Volume: 81-03, Section: B.
Advisor: Gupta, Rajesh K.
Use Restrictions: This item must not be sold to any third party vendors. This item must not be added to any third party search indexes.
Abstract: The entangled guardbands on timing specifications and energy budgets protect a system against faults, but at the same time they impede advances in throughput and energy efficiency. To combat over-designed guardbands in systems that carry out deep learning inference, we examine the algorithmic demands and find that resource deficiency and hardware variation are the major reasons conservative guardbands are needed. In modern convolutional neural networks (CNNs), the number of arithmetic operations for a single inference can exceed tens of billions, which requires a sophisticated buffering mechanism to balance resource utilization against throughput; in this setting, over-designed guardbands can seriously hinder system performance. On the other hand, timing errors can be incurred by hardware variations, including momentary voltage droops resulting from simultaneous switching noise, a gradually decreasing voltage level due to a limited battery, and slow electron mobility caused by the dissipation of system power into heat. Timing errors propagating through a network may start as a small snowball but end in catastrophe: a significant degradation in classification accuracy.

Knowing that the need for guardbands originates from resource deficiency and timing errors, this dissertation focuses on cross-layer solutions to the high algorithmic demands of deep learning methods and to the error vulnerability caused by hardware variations. We begin by reviewing methods and technologies proposed in the literature, including weight encoding, filter decomposition, network pruning, efficient structure design, and precision quantization. In implementing an FPGA accelerator for the extreme case of quantization, binarized neural networks (BNNs), we realized that further optimizations were possible. We then extend BNNs at the algorithmic layer with binarized separable filters and propose BCNNw/SF. Although quantization and approximation benefit hardware efficiency to a certain extent, the achievable reduction or compression rate is still limited by the core of conventional deep learning methods: convolution. We therefore introduce the local binary pattern (LBP) to deep learning because of LBP's low complexity yet high effectiveness. We name the new algorithm LBPNet; its feature maps are created with comparisons, in a fashion similar to traditional LBP. LBPNet can be trained with the forward-backward propagation algorithm to extract useful features for image classification. LBPNet accelerators have been implemented and optimized to verify their classification performance, processing throughput, and energy efficiency. We also demonstrate that LBPNet has the strongest error immunity among the MLP, CNN, and BCNN models studied: when the timing error rate exceeds 0.01, the classification accuracy of LBPNet decreases by only 10%, while all the other models lose the ability to classify. (A brief illustrative sketch of LBP-style comparison-based feature extraction follows this record.)
General Subject: Computer science.
Computer engineering.
Language: English
Link URL: The full text of this item is provided by the Korea Education and Research Information Service (KERIS).
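
The abstract describes LBPNet's key idea: building feature maps from pixel comparisons, as in the traditional local binary pattern (LBP), instead of convolution's multiply-accumulate operations. The listing below is a minimal NumPy sketch of classic fixed-ring LBP feature extraction, included only to illustrate the comparison-based operation; it is not the dissertation's implementation, and LBPNet itself learns its sampling positions rather than using this fixed eight-neighbor ring. The function name lbp_feature_map and its parameters are illustrative assumptions.

import numpy as np

def lbp_feature_map(image, radius=1):
    """Classic eight-neighbor local binary pattern (LBP) map.

    Each output pixel is an 8-bit code: bit i is set when the i-th
    neighbor on a square ring of the given radius is at least as bright
    as the center pixel. Only comparisons are used -- no multiplications.
    """
    h, w = image.shape
    # Neighbor offsets on a square ring, enumerated clockwise.
    offsets = [(-radius, -radius), (-radius, 0), (-radius, radius),
               (0, radius), (radius, radius), (radius, 0),
               (radius, -radius), (0, -radius)]
    padded = np.pad(image, radius, mode="edge")
    center = padded[radius:radius + h, radius:radius + w]
    pattern = np.zeros((h, w), dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = padded[radius + dy:radius + dy + h,
                          radius + dx:radius + dx + w]
        # A comparison replaces a multiply-accumulate: set this bit
        # wherever the sampled neighbor is >= the center pixel.
        pattern |= (neighbor >= center).astype(np.uint8) * np.uint8(1 << bit)
    return pattern

# Example: LBP feature map of a random 8-bit grayscale image.
img = np.random.randint(0, 256, size=(32, 32), dtype=np.uint8)
print(lbp_feature_map(img).shape)  # (32, 32)

Because each feature bit comes from a single comparison rather than a multiplication, the arithmetic cost per output is far lower than that of a convolution kernel, which is the hardware-efficiency argument the abstract makes for LBPNet.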
