MARC View
LDR00000nam u2200205 4500
001000000435052
00520200227114012
008200131s2019 ||||||||||||||||| ||eng d
020 ▼a 9781687972293
035 ▼a (MiAaPQ)AAI27602823
035 ▼a (MiAaPQ)OhioLINKosu1546539166677894
040 ▼a MiAaPQ ▼c MiAaPQ ▼d 247004
0820 ▼a 629.8
1001 ▼a El-Shaer, Mennat Allah Ahmed Mohammed.
24513 ▼a An Experimental Evaluation of Probabilistic Deep Networks for Real-time Traffic Scene Representation using Graphical Processing Units.
260 ▼a [S.l.] : ▼b The Ohio State University, ▼c 2019.
260 1 ▼a Ann Arbor: ▼b ProQuest Dissertations & Theses, ▼c 2019.
300 ▼a 107 p.
500 ▼a Source: Dissertations Abstracts International, Volume: 81-06, Section: B.
500 ▼a Advisor: Ozguner, Fusun
5021 ▼a Thesis (Ph.D.)--The Ohio State University, 2019.
506 ▼a This item must not be sold to any third party vendors.
520 ▼a Scene understanding and environment perception have long been important problems in robotics research; however, existing solutions deployed in current Advanced Driving Assistance Systems (ADAS) are not robust enough to ensure the safety of traffic participants. ADAS development begins with sensor data collection and with algorithms that interpret that data to guide the intelligent vehicle's control decisions. Much work has been done to extract information from camera-based image sensors, but most solutions require hand-designed features that usually break down under different lighting and weather conditions. Urban traffic scenes, in particular, present a challenge to vision perception systems due to the dynamic interactions among participants, whether they are pedestrians, bicyclists, or other vehicles. Object detection deep learning models have proved successful in classifying or identifying objects on the road, but they do not allow for the probabilistic reasoning and learning that traffic situations require. Deep generative models, which learn the data distribution of their training sets, can generate samples that better represent sensory data, which leads to better feature representations and eventually better perception systems. Because learning such models is computationally intensive, we utilize graphics processing chips designed for vision processing. In this thesis, we present a small image dataset collected from different types of busy intersections on a university campus, along with our CUDA implementations of training a Restricted Boltzmann Machine on an NVIDIA GTX 1080 GPU and of its generative sampling inference on an NVIDIA Tegra X1 SoC module. We demonstrate the sampling capability of a simple unsupervised network trained on a subset of the dataset, along with profiling results from experiments done on the Jetson TX1 platform. We also include a quantitative study of different GPU optimization techniques performed on the Jetson TX1.
590 ▼a School code: 0168.
650 4 ▼a Computer engineering.
650 4 ▼a Computer science.
650 4 ▼a Artificial intelligence.
650 4 ▼a Electrical engineering.
650 4 ▼a Robotics.
690 ▼a 0984
690 ▼a 0464
690 ▼a 0544
690 ▼a 0800
690 ▼a 0771
71020 ▼a The Ohio State University. ▼b Electrical and Computer Engineering.
7730 ▼t Dissertations Abstracts International ▼g 81-06B.
773 ▼t Dissertations Abstracts International
790 ▼a 0168
791 ▼a Ph.D.
792 ▼a 2019
793 ▼a English
85640 ▼u http://www.riss.kr/pdu/ddodLink.do?id=T15494552 ▼n KERIS ▼z The full text of this material is provided by the Korea Education and Research Information Service (KERIS).
980 ▼a 202002 ▼f 2020
990 ▼a ***1008102
991 ▼a E-BOOK