MARC View
LDR 00000nam u2200205 4500
001 000000434230
005 20200226142930
008 200131s2019 ||||||||||||||||| ||eng d
020 ▼a 9781392370957
035 ▼a (MiAaPQ)AAI22587031
040 ▼a MiAaPQ ▼c MiAaPQ ▼d 247004
082 0 ▼a 153
100 1 ▼a Xia, Ye.
245 10 ▼a Driver Eye Movements and the Application in Autonomous Driving.
260 ▼a [S.l.] : ▼b University of California, Berkeley, ▼c 2019.
260 1 ▼a Ann Arbor: ▼b ProQuest Dissertations & Theses, ▼c 2019.
300 ▼a 65 p.
500 ▼a Source: Dissertations Abstracts International, Volume: 81-06, Section: B.
500 ▼a Advisor: Whitney, David.
502 1 ▼a Thesis (Ph.D.)--University of California, Berkeley, 2019.
506 ▼a This item must not be sold to any third party vendors.
520 ▼a Despite the exciting progress in computer vision in the field of autonomous driving, efficiently identifying which cues or objects are the most crucial in a crowded traffic scene remains a major challenge. Human drivers can quickly identify the important visual cues or objects in their blurry peripheral vision and then make eye movements to direct their more accurate foveal vision to the important regions. Driver eye movements may therefore be what computer vision can borrow from human vision to make autonomous driving systems better at locating and understanding the important regions of crowded traffic scenes. Meanwhile, the large-scale datasets and advanced object recognition algorithms that have emerged in the field of autonomous driving make it possible to study classical human vision science problems in natural driving situations. Here, we used driver eye movements to improve autonomous driving models and studied visual crowding, the bottleneck of human object recognition, in realistic driving situations through driver eye movements. First, we developed a new protocol that collects driver eye movements in an offline manner for large-scale driving video datasets. We built a deep neural network that predicts human driver gaze from dash camera videos across various driving scenarios; our model outperformed the current state-of-the-art model. Furthermore, we incorporated the driver gaze prediction model into an autonomous driving model to create a new periphery-fovea multi-resolution driving model that predicts vehicle speed from dash camera videos. This model combines low-resolution input of the whole video frames with high-resolution input from predicted gaze locations to predict vehicle speed. We show that the added human gaze significantly improves driving accuracy and that our periphery-fovea multi-resolution model outperforms a uni-resolution periphery-only model with the same number of floating-point operations. Finally, we studied visual crowding in driving situations. We show that crowding occurs in natural driving scenes and that the degree of crowding correlates with altered saccade localization in realistic driving-like situations. Together, these studies demonstrate the application of driver eye movements in building safer and more efficient autonomous driving models and provide strong evidence of visual crowding in driving situations via the analysis of driver eye movements. These studies also present examples of combining human vision and computer vision to obtain mutual benefits for both fields.
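[Editor's sketch] The periphery-fovea multi-resolution model summarized in the abstract pairs a low-resolution view of the whole frame with a high-resolution crop taken at the predicted gaze point, then fuses the two streams to regress vehicle speed. The following minimal PyTorch sketch illustrates that idea only; the module sizes, crop size, and concatenation-based fusion are illustrative assumptions, not the dissertation's actual architecture (which also operates on video rather than single frames).

import torch
import torch.nn as nn
import torch.nn.functional as F

class PeripheryFoveaModel(nn.Module):
    """Two-stream speed regressor: low-res whole frame + high-res gaze crop."""
    def __init__(self, feat_dim: int = 64):
        super().__init__()
        def encoder():
            return nn.Sequential(
                nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),   # -> (B, 32, 1, 1)
            )
        self.periphery = encoder()  # sees the entire frame at low resolution
        self.fovea = encoder()      # sees a high-resolution crop at the gaze point
        self.head = nn.Sequential(
            nn.Linear(64, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, 1),  # scalar vehicle speed
        )

    def forward(self, frames, gaze_xy, crop=96, low_res=128):
        # Periphery stream: downsample the whole frame.
        low = F.interpolate(frames, size=(low_res, low_res),
                            mode="bilinear", align_corners=False)
        p = self.periphery(low).flatten(1)
        # Fovea stream: full-resolution crop centered on the predicted gaze.
        crops = []
        for img, (x, y) in zip(frames, gaze_xy):
            _, h, w = img.shape
            x0 = int(max(0, min(w - crop, int(x) - crop // 2)))
            y0 = int(max(0, min(h - crop, int(y) - crop // 2)))
            crops.append(img[:, y0:y0 + crop, x0:x0 + crop])
        f = self.fovea(torch.stack(crops)).flatten(1)
        # Fuse both streams and regress speed.
        return self.head(torch.cat([p, f], dim=1))

model = PeripheryFoveaModel()
frames = torch.rand(2, 3, 720, 1280)                    # dash-camera frames
gaze = torch.tensor([[640.0, 360.0], [300.0, 500.0]])   # predicted (x, y) gaze
speed = model(frames, gaze)                             # shape: (2, 1)

The design point this sketch captures is the FLOP trade-off the abstract describes: the cheap periphery stream covers the whole scene, while the fovea stream spends its resolution budget only where the gaze model predicts the driver would look.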
590 ▼a School code: 0028.
650 4 ▼a Cognitive psychology.
690 ▼a 0633
710 20 ▼a University of California, Berkeley. ▼b Psychology.
773 0 ▼t Dissertations Abstracts International ▼g 81-06B.
773 ▼t Dissertations Abstracts International
790 ▼a 0028
791 ▼a Ph.D.
792 ▼a 2019
793 ▼a English
856 40 ▼u http://www.riss.kr/pdu/ddodLink.do?id=T15492958 ▼n KERIS ▼z The full text of this material is provided by the Korea Education and Research Information Service (KERIS).
980 ▼a 202002 ▼f 2020
990 ▼a ***1008102
991 ▼a E-BOOK