MARC View
LDR 00000nam u2200205 4500
001 000000436791
005 20200228153033
008 200131s2019 ||||||||||||||||| ||eng d
020 ▼a 9781085564748
035 ▼a (MiAaPQ)AAI13807790
040 ▼a MiAaPQ ▼c MiAaPQ ▼d 247004
0820 ▼a 004
1001 ▼a Klawonn, Matthew.
24510 ▼a Combining Supervised Machine Learning and Structured Knowledge for Difficult Perceptual Tasks.
260 ▼a [S.l.] : ▼b Rensselaer Polytechnic Institute, ▼c 2019.
260 1 ▼a Ann Arbor: ▼b ProQuest Dissertations & Theses, ▼c 2019.
300 ▼a 141 p.
500 ▼a Source: Dissertations Abstracts International, Volume: 81-02, Section: B.
500 ▼a Advisor: Hendler, James.
5021 ▼a Thesis (Ph.D.)--Rensselaer Polytechnic Institute, 2019.
506 ▼a This item must not be sold to any third party vendors.
520 ▼a Learning models of visual perception lies at the heart of a number of computer vision problems, including object detection, image description, motion tracking, and more. A variety of models can complete such tasks, though the tasks themselves are usually assumed to be consistent in their requirements: receive visual input, and perceive some desired content in that input. Yet for certain tasks, the desired outputs are very difficult to predict given input images alone. Many perceptual tasks require not only the ability to parse the content of a visual scene, but also the ability to combine visual information with auxiliary knowledge to reach conclusions. Rather than attempt to incorporate auxiliary knowledge into the parameters of a learned model, this work presents an alternative approach. We hypothesize that there are significant benefits in training perceptual models such that they can interact successfully with external information, while keeping that information external. Toward validating this hypothesis, we take the following steps. First, we motivate representing external knowledge in a structured, symbolic form, a choice based on the flexibility and expressivity of knowledge representation and reasoning techniques. We then create and evaluate a novel perceptual model that produces scene graphs, an output that can be combined with structured symbolic knowledge to produce complex inferences. Experiments show that this model performs comparably to the state of the art on standard benchmarks and metrics, while holding significant advantages in the flexibility of its training setup and the variety of outputs it can produce. In order to improve compatibility between the scene graph generator and any external knowledge available for inference, we also develop a novel meta-learning method for directing the training process of learning algorithms. Specifically, our method learns in an online fashion to select training data for which a given model has good performance. When combined with the scene graph generator, this meta-learning algorithm facilitates a clean split between learned knowledge and external knowledge. The meta-learning algorithm distills information in the training data and in the external knowledge, constructing training scene graphs that are "learnable", while leaving the remaining information to be used during inference. We test our approach on a semantic search task, showing that the learned perceptual model, meta-learning algorithm, and structured knowledge inference techniques perform better in combination than they do separately.
590 ▼a School code: 0185.
650 4 ▼a Artificial intelligence.
650 4 ▼a Computer science.
690 ▼a 0800
690 ▼a 0984
71020 ▼a Rensselaer Polytechnic Institute. ▼b Computer Science.
7730 ▼t Dissertations Abstracts International ▼g 81-02B.
773 ▼t Dissertations Abstracts International
790 ▼a 0185
791 ▼a Ph.D.
792 ▼a 2019
793 ▼a English
85640 ▼u http://www.riss.kr/pdu/ddodLink.do?id=T15490513 ▼n KERIS ▼z The full text of this material is provided by the Korea Education and Research Information Service (KERIS).
980 ▼a 202002 ▼f 2020
990 ▼a ***1008102
991 ▼a E-BOOK