Daegu Haany University Hyangsan Library

Unified Compositional Models for Visual Recognition

Record Details
Material type: Dissertation (thesis)
Title/Author: Unified Compositional Models for Visual Recognition.
Personal author: Tang, Wei.
Corporate author: Northwestern University. Electrical and Computer Engineering.
Publication: [S.l.]: Northwestern University, 2019.
Publication: Ann Arbor: ProQuest Dissertations & Theses, 2019.
Physical description: 100 p.
Source record: Dissertations Abstracts International 81-05B.
ISBN: 9781088315606
Dissertation note: Thesis (Ph.D.)--Northwestern University, 2019.
General note: Source: Dissertations Abstracts International, Volume: 81-05, Section: B.
Advisor: Wu, Ying.
Restrictions on use: This item must not be sold to any third-party vendors.
Abstract: A core problem in many computer vision applications is visual recognition, including object classification, detection, and localization. Recent advances in artificial neural networks ("deep learning") have significantly pushed forward the state of the art in visual recognition. However, lacking semantic structure modeling, most current deep learning approaches have no explicit mechanisms for visual inference and reasoning; as a result, they are unable to explain, interpret, and understand the relations among visual entities. Moreover, since deep learning fits a highly nonlinear function, it is data hungry and can overfit when the data are not big enough.

This thesis studies deep and unified computational modeling of visual compositionality, a new mechanism that unifies semantic structure modeling and deep learning into an effective learning framework for robust visual recognition. Visual compositionality refers to the decomposition of complex visual patterns into hierarchies of simpler ones. It not only provides much stronger pattern expression power but also helps resolve ambiguities in smaller, lower-level visual patterns via larger, higher-level ones.

We first present a unified framework for compositional pattern modeling, inference, and learning. Represented by And-Or graphs (AOGs), it jointly models the compositional structure, parts, features, and composition/sub-configuration relationships. We show that the inference algorithm of the proposed framework is equivalent to a feedforward network, so all parameters can be learned efficiently, end to end, via highly scalable back-propagation (BP). We validate the model on handwritten digit recognition: by visualizing the processes of bottom-up composition and top-down parsing, we show that the model is fully interpretable, learning hierarchical compositions from visual primitives to visual patterns at increasingly higher levels. We also apply this compositional model to natural scene character recognition and generic object detection, where experimental results demonstrate its effectiveness.
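To make the And-Or inference concrete, here is a minimal sketch (not the thesis code) of scoring an AOG in one bottom-up feedforward pass. It assumes the common AOG semantics in which terminal nodes score features with learned templates, And-nodes compose children by summing their scores, and Or-nodes select the best sub-configuration with a max; since sum and max are (sub)differentiable, such a pass is trainable with back-propagation, which is what makes the inference equivalent to a feedforward network.

```python
# Minimal sketch (not the thesis code): scoring an And-Or graph (AOG) with a
# single bottom-up feedforward pass, under the assumed semantics that
# terminal nodes score features, And-nodes sum child scores (composition),
# and Or-nodes take a max over children (sub-configuration selection).
import numpy as np

class Terminal:
    def __init__(self, weight):
        self.weight = np.asarray(weight, dtype=float)  # learned part template

    def score(self, features):
        # Linear response of a part template to the input features.
        return float(self.weight @ features)

class AndNode:
    def __init__(self, children):
        self.children = children  # parts that must co-occur

    def score(self, features):
        # Composition: the pattern's score is the sum of its parts' scores.
        return sum(c.score(features) for c in self.children)

class OrNode:
    def __init__(self, children):
        self.children = children  # alternative sub-configurations

    def score(self, features):
        # Selection: keep the best alternative (the argmax gives the parse).
        return max(c.score(features) for c in self.children)

# Toy AOG: a root Or-node choosing between two compositions of three parts.
rng = np.random.default_rng(0)
parts = [Terminal(rng.normal(size=4)) for _ in range(3)]
root = OrNode([AndNode(parts[:2]), AndNode(parts[1:])])
print(root.score(rng.normal(size=4)))  # one bottom-up feedforward pass
```

The choice made at each Or-node is what top-down parsing retraces: following the winning branches back down recovers the selected sub-configurations, which is one way to read the interpretability claim above.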
We then introduce a novel deeply learned compositional model for human pose estimation (HPE). It exploits deep neural networks to learn the compositionality of human bodies, yielding a network with a hierarchical compositional architecture and bottom-up/top-down inference stages. In addition, we propose a novel bone-based part representation, which not only compactly encodes the orientations, scales, and shapes of parts but also avoids their potentially large state spaces. With significantly lower complexity, our approach outperforms state-of-the-art methods on three benchmark datasets.

Finally, we study how features can be learned in a compositional fashion. The motivation is that HPE is inherently a homogeneous multi-task learning problem, with the localization of each body part as a separate task. Recent HPE approaches universally learn a shared representation for all parts, from which the part locations are linearly regressed. However, our statistical analysis indicates that not all parts are related to each other, so such a sharing mechanism can lead to negative transfer and deteriorate performance. To resolve this issue, we first propose a data-driven approach that groups related parts based on how much information they share (see the sketch below). A part-based branching network (PBN) is then introduced to learn representations specific to each part group, and a multi-stage version of this network repeatedly refines intermediate features and pose estimates. Ablation experiments indicate that learning part-specific features significantly improves the localization of occluded parts and thus benefits HPE. Our approach also outperforms all state-of-the-art methods on two benchmark datasets, with an outstanding advantage when occlusion occurs.
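As an illustration of the data-driven grouping step, the sketch below groups parts by a relatedness score computed from annotated poses. The thesis groups parts by how much information they share; the specific choices here (absolute correlation of toy 1-D part coordinates as the relatedness measure, and a greedy merge as the grouping rule) are hypothetical stand-ins for exposition, not the author's exact procedure. Each resulting group would then receive its own branch in a PBN-style network.

```python
# Illustrative sketch only: group body parts by a relatedness score, so each
# group can get its own network branch. Absolute correlation of part
# coordinates across a toy pose dataset stands in for the thesis's "how much
# information they share" measure; a greedy merge stands in for the grouping.
import numpy as np

rng = np.random.default_rng(0)
N, P = 1000, 6                      # poses, parts (e.g. wrists, elbows, ...)
# Toy annotations: parts 0-2 move together, parts 3-5 move together.
base_a = rng.normal(size=(N, 1))
base_b = rng.normal(size=(N, 1))
x = np.hstack([base_a + 0.1 * rng.normal(size=(N, 3)),
               base_b + 0.1 * rng.normal(size=(N, 3))])  # (N, P) coordinates

# Pairwise relatedness: |correlation| as a stand-in for shared information.
R = np.abs(np.corrcoef(x, rowvar=False))

def group_parts(R, threshold=0.5):
    """Greedily merge part groups whose strongest link exceeds the threshold."""
    groups = [{p} for p in range(R.shape[0])]
    merged = True
    while merged:
        merged = False
        for i in range(len(groups)):
            for j in range(i + 1, len(groups)):
                link = max(R[a, b] for a in groups[i] for b in groups[j])
                if link > threshold:
                    groups[i] |= groups.pop(j)
                    merged = True
                    break
            if merged:
                break
    return groups

print(group_parts(R))  # expected on this toy data: [{0, 1, 2}, {3, 4, 5}]
```

On this toy data the two planted clusters are recovered, which mirrors the intent described above: strongly related parts share a representation, while unrelated parts are kept in separate branches to avoid negative transfer.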
Subject: Electrical engineering.
Computer science.
Language: English
Link: The full text of this item is provided by the Korea Education and Research Information Service (KERIS).
