MARC View
LDR 00000nam u2200205 4500
001 000000431830
005 20200224105627
008 200131s2019 ||||||||||||||||| ||eng d
020 ▼a 9781088315606
035 ▼a (MiAaPQ)AAI13899092
040 ▼a MiAaPQ ▼c MiAaPQ ▼d 247004
082 0 ▼a 004
100 1 ▼a Tang, Wei.
245 10 ▼a Unified Compositional Models for Visual Recognition.
260 ▼a [S.l.] : ▼b Northwestern University, ▼c 2019.
260 1 ▼a Ann Arbor: ▼b ProQuest Dissertations & Theses, ▼c 2019.
300 ▼a 100 p.
500 ▼a Source: Dissertations Abstracts International, Volume: 81-05, Section: B.
500 ▼a Advisor: Wu, Ying.
502 1 ▼a Thesis (Ph.D.)--Northwestern University, 2019.
506 ▼a This item must not be sold to any third party vendors.
520 ▼a A core problem in many computer vision applications is visual recognition, including object classification, detection, and localization. Recent advances in artificial neural networks (also known as "deep learning") have significantly advanced the state of the art in visual recognition. However, because they lack semantic structure modeling, most current deep learning approaches have no explicit mechanisms for visual inference and reasoning. As a result, they are unable to explain, interpret, and understand the relations among visual entities. Moreover, since deep learning fits a highly nonlinear function, it is data-hungry and can overfit when the data is not big enough. This thesis studies deep and unified computational modeling for visual compositionality, a new mechanism that unifies semantic structure modeling and deep learning into an effective learning framework for robust visual recognition. Visual compositionality refers to the decomposition of complex visual patterns into hierarchies of simpler ones. It not only offers much stronger pattern expression power, but also helps resolve ambiguities in smaller, lower-level visual patterns via larger, higher-level ones. We first present a unified framework for compositional pattern modeling, inference, and learning. Represented by And-Or graphs (AOGs), it jointly models the compositional structure, parts, features, and composition/sub-configuration relationships. We show that the inference algorithm of the proposed framework is equivalent to a feedforward network; thus, all the parameters can be learned efficiently via highly scalable back-propagation (BP) in an end-to-end fashion. We validate the model on the task of handwritten digit recognition. By visualizing the processes of bottom-up composition and top-down parsing, we show that our model is fully interpretable, able to learn hierarchical compositions from visual primitives to visual patterns at increasingly higher levels. We apply this new compositional model to natural scene character recognition and generic object detection, and experimental results demonstrate its effectiveness. We then introduce a novel deeply learned compositional model for human pose estimation (HPE). It exploits deep neural networks to learn the compositionality of human bodies, resulting in a novel network with a hierarchical compositional architecture and bottom-up/top-down inference stages. In addition, we propose a novel bone-based part representation, which not only compactly encodes the orientations, scales, and shapes of parts, but also avoids their potentially large state spaces. With significantly lower complexity, our approach outperforms state-of-the-art methods on three benchmark datasets. Finally, we study how features can be learned in a compositional fashion. The motivation is that HPE is inherently a homogeneous multi-task learning problem, with the localization of each body part as a different task. Recent HPE approaches universally learn a shared representation for all parts, from which the part locations are linearly regressed. However, our statistical analysis indicates that not all parts are related to each other; such a sharing mechanism can therefore lead to negative transfer and degrade performance. To resolve this issue, we first propose a data-driven approach that groups related parts according to how much information they share.
Then a part-based branching network (PBN) is introduced to learn representations specific to each part group. We further present a multi-stage version of this network to repeatedly refine intermediate features and pose estimates. Ablation experiments indicate that learning part-specific features significantly improves the localization of occluded parts and thus benefits HPE. Our approach also outperforms all state-of-the-art methods on two benchmark datasets, with a particularly large advantage when occlusion occurs.
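[Editor's note] The branching idea summarized above can be pictured as a shared trunk followed by group-specific heads. Below is a minimal, hypothetical PyTorch sketch of that design; the part grouping, channel sizes, and layers are illustrative assumptions, not the network from the thesis.

```python
# Illustrative sketch only: a shared trunk with one branch per part group,
# mirroring the part-based branching idea described in the abstract.
import torch
import torch.nn as nn

# Hypothetical grouping of body-part indices; the thesis derives groups
# from a data-driven analysis of how much information parts share.
PART_GROUPS = [[0, 1, 2], [3, 4], [5, 6, 7]]

class BranchingPoseNet(nn.Module):
    def __init__(self, in_channels=3, trunk_channels=64):
        super().__init__()
        # Shared trunk: features common to all parts.
        self.trunk = nn.Sequential(
            nn.Conv2d(in_channels, trunk_channels, 3, padding=1),
            nn.ReLU(inplace=True),
        )
        # One branch per part group: group-specific representations,
        # ending in one heatmap channel per part in the group.
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(trunk_channels, trunk_channels, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(trunk_channels, len(group), 1),
            )
            for group in PART_GROUPS
        )

    def forward(self, x):
        shared = self.trunk(x)
        # Each branch predicts heatmaps only for its own part group,
        # so unrelated parts do not share a representation.
        return [branch(shared) for branch in self.branches]

heatmaps = BranchingPoseNet()(torch.randn(1, 3, 64, 64))
print([h.shape for h in heatmaps])  # one heatmap tensor per part group
```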
590 ▼a School code: 0163.
650 4 ▼a Electrical engineering.
650 4 ▼a Computer science.
690 ▼a 0544
690 ▼a 0984
710 20 ▼a Northwestern University. ▼b Electrical and Computer Engineering.
773 0 ▼t Dissertations Abstracts International ▼g 81-05B.
790 ▼a 0163
791 ▼a Ph.D.
792 ▼a 2019
793 ▼a English
856 40 ▼u http://www.riss.kr/pdu/ddodLink.do?id=T15492017 ▼n KERIS ▼z The full text of this material is provided by the Korea Education and Research Information Service (KERIS).
980 ▼a 202002 ▼f 2020
990 ▼a ***1008102
991 ▼a E-BOOK