MARC View
LDR00000nam u2200205 4500
001000000434363
00520200226150128
008200131s2019 ||||||||||||||||| ||eng d
020 ▼a 9781392619315
035 ▼a (MiAaPQ)AAI22620205
040 ▼a MiAaPQ ▼c MiAaPQ ▼d 247004
0820 ▼a 620
1001 ▼a Wu, Bichen.
24510 ▼a Efficient Deep Neural Networks.
260 ▼a [S.l.] : ▼b University of California, Berkeley, ▼c 2019.
260 1 ▼a Ann Arbor: ▼b ProQuest Dissertations & Theses, ▼c 2019.
300 ▼a 159 p.
500 ▼a Source: Dissertations Abstracts International, Volume: 81-05, Section: B.
500 ▼a Advisor: Keutzer, Kurt.
5021 ▼a Thesis (Ph.D.)--University of California, Berkeley, 2019.
506 ▼a This item must not be sold to any third-party vendors.
520 ▼a The success of deep neural networks (DNNs) is attributable to three factors: increased compute capacity, more complex models, and more data. These factors, however, are not always present, especially for edge applications such as autonomous driving, augmented reality, and the internet-of-things. Training DNNs requires a large amount of data, which is difficult to obtain. Edge devices such as mobile phones have limited compute capacity and therefore require specialized and efficient DNNs. However, due to the enormous design space and prohibitive training costs, designing efficient DNNs for different target devices is challenging. So the question is: with limited data, compute capacity, and model complexity, can we still successfully apply deep neural networks? This dissertation addresses these problems by improving the efficiency of deep neural networks at four levels. Model efficiency: we designed neural networks for various computer vision tasks, achieving more than 10x faster speed and lower energy consumption. Data efficiency: we developed an advanced tool that enables 6.2x faster annotation of LiDAR point clouds. We also leveraged domain adaptation to utilize simulated data, bypassing the need for real data. Hardware efficiency: we co-designed neural networks and hardware accelerators, achieving 11.6x faster inference. Design efficiency: because finding optimal neural networks is time-consuming, our automated neural architecture search algorithms discovered models with state-of-the-art accuracy and efficiency at 421x lower computational cost than previous search methods.
590 ▼a School code: 0028.
650 4 ▼a Artificial intelligence.
650 4 ▼a Engineering.
690 ▼a 0800
690 ▼a 0537
71020 ▼a University of California, Berkeley. ▼b Electrical Engineering & Computer Sciences.
7730 ▼t Dissertations Abstracts International ▼g 81-05B.
790 ▼a 0028
791 ▼a Ph.D.
792 ▼a 2019
793 ▼a English
85640 ▼u http://www.riss.kr/pdu/ddodLink.do?id=T15493696 ▼n KERIS ▼z The full text of this material is provided by the Korea Education and Research Information Service (KERIS).
980 ▼a 202002 ▼f 2020
990 ▼a ***1008102
991 ▼a E-BOOK