Material Type | Dissertation |
---|---|
Title/Author | An Experimental Evaluation of Probabilistic Deep Networks for Real-time Traffic Scene Representation using Graphical Processing Units. |
Personal Author | El-Shaer, Mennat Allah Ahmed Mohammed.
Corporate Author | The Ohio State University. Electrical and Computer Engineering.
Publication | [S.l.]: The Ohio State University, 2019.
Publication | Ann Arbor: ProQuest Dissertations & Theses, 2019.
Physical Description | 107 p.
Source Record | Dissertations Abstracts International 81-06B.
ISBN | 9781687972293
Dissertation Note | Thesis (Ph.D.)--The Ohio State University, 2019.
General Note |
Source: Dissertations Abstracts International, Volume: 81-06, Section: B.
Advisor: Ozguner, Fusun |
Restrictions on Use | This item must not be sold to any third party vendors. |
Abstract | Scene understanding and environment perception are long-standing problems in robotics research; however, the solutions deployed in current Advanced Driving Assistance Systems (ADAS) are not robust enough to ensure the safety of traffic participants. ADAS development begins with sensor data collection and with algorithms that interpret that data to guide the intelligent vehicle's control decisions. Much work has been done to extract information from camera-based image sensors, but most solutions rely on hand-designed features that usually break down under different lighting and weather conditions. Urban traffic scenes in particular challenge vision perception systems because of the dynamic interactions among participants, whether pedestrians, bicyclists, or other vehicles. Object detection deep learning models have proved successful at classifying and identifying objects on the road, but they do not allow for the probabilistic reasoning and learning that traffic situations require. Deep generative models, which learn the data distribution of their training sets, can generate samples that better represent sensory data, leading to better feature representations and eventually better perception systems. Because learning such models is computationally intensive, we utilize graphics processing chips designed for vision processing. In this thesis, we present a small image dataset collected from different types of busy intersections on a university campus, along with our CUDA implementations of Restricted Boltzmann Machine training on an NVIDIA GTX 1080 GPU and of its generative sampling inference on an NVIDIA Tegra X1 SoC module. We demonstrate the sampling capability of a simple unsupervised network trained on a subset of the dataset, along with profiling results from experiments done on the Jetson TX1 platform.
We also include a quantitative study of different GPU optimization techniques performed on the Jetson TX1. |
General Subjects | Computer engineering. Computer science. Artificial intelligence. Electrical engineering. Robotics. |
Language | English |
Link |
: The full text of this material is provided by the Korea Education and Research Information Service (KERIS). |
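The abstract describes training a Restricted Boltzmann Machine and drawing generative samples from it. As background for the technique named there, the following is a minimal NumPy sketch of RBM training with one-step contrastive divergence (CD-1) and Gibbs sampling; the network sizes, learning rate, and toy data are illustrative assumptions, not the thesis's CUDA implementation.

```python
import numpy as np

# Minimal RBM with CD-1 training -- an illustrative sketch of the
# generative-model technique named in the abstract, not the thesis code.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    def __init__(self, n_visible, n_hidden, lr=0.1):
        # Small random weights; zero biases for visible and hidden units.
        self.W = rng.normal(0.0, 0.01, size=(n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)
        self.b_h = np.zeros(n_hidden)
        self.lr = lr

    def sample_h(self, v):
        # P(h=1|v) and a binary sample of the hidden units.
        p = sigmoid(v @ self.W + self.b_h)
        return p, (rng.random(p.shape) < p).astype(float)

    def sample_v(self, h):
        # P(v=1|h) and a binary sample of the visible units.
        p = sigmoid(h @ self.W.T + self.b_v)
        return p, (rng.random(p.shape) < p).astype(float)

    def cd1_step(self, v0):
        # Positive phase: hidden activations driven by the data.
        ph0, h0 = self.sample_h(v0)
        # Negative phase: one Gibbs step back to the visible layer.
        pv1, _ = self.sample_v(h0)
        ph1, _ = self.sample_h(pv1)
        n = v0.shape[0]
        # Gradient approximation: <v h>_data - <v h>_model.
        self.W += self.lr * (v0.T @ ph0 - pv1.T @ ph1) / n
        self.b_v += self.lr * (v0 - pv1).mean(axis=0)
        self.b_h += self.lr * (ph0 - ph1).mean(axis=0)
        return float(np.mean((v0 - pv1) ** 2))  # reconstruction error

# Toy binary dataset: two repeated patterns the RBM can memorize.
data = np.array([[1, 1, 0, 0], [0, 0, 1, 1]] * 8, dtype=float)
rbm = RBM(n_visible=4, n_hidden=2)
errors = [rbm.cd1_step(data) for _ in range(200)]
print(f"first error {errors[0]:.3f}, last error {errors[-1]:.3f}")
```

In the thesis's setting the same Gibbs-sampling loops are what get mapped onto CUDA kernels, since the matrix products and elementwise sigmoids parallelize naturally across GPU threads.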