MARC View
LDR 00000nam u2200205 4500
001 000000433253
005 20200225132449
008 200131s2019 ||||||||||||||||| ||eng d
020 ▼a 9781085792318
035 ▼a (MiAaPQ)AAI13886262
040 ▼a MiAaPQ ▼c MiAaPQ ▼d 247004
082 0 ▼a 621.3
100 1 ▼a Hendricks, Lisa Anne Marie.
245 10 ▼a Visual Understanding through Natural Language.
260 ▼a [S.l.]: ▼b University of California, Berkeley, ▼c 2019.
260 1 ▼a Ann Arbor: ▼b ProQuest Dissertations & Theses, ▼c 2019.
300 ▼a 150 p.
500 ▼a Source: Dissertations Abstracts International, Volume: 81-04, Section: A.
500 ▼a Advisor: Darrell, Trevor.
502 1 ▼a Thesis (Ph.D.)--University of California, Berkeley, 2019.
506 ▼a This item must not be sold to any third-party vendors.
520 ▼a Powered by deep convolutional networks and large-scale visual datasets, modern computer vision systems can accurately recognize thousands of visual categories. However, images contain much more than categorical labels: they contain information about where objects are located (in a forest or in a kitchen?), what attributes an object has (red or blue?), and how objects interact with other objects in a scene (is the child sitting on a sofa, or running in a field?). Natural language provides an efficient and intuitive way for visual systems to convey important information about a visual scene. We begin by considering a fundamental task at the intersection of language and vision: image captioning, in which a system receives an image as input and outputs a natural language sentence that describes the image. We consider two important shortcomings of modern image captioning models. First, in order to describe an object like "otter", captioning models require paired images and sentences that include the object "otter". In Chapter 2, we build models that can learn an object like "otter" from classification data, which is abundant and easy to collect, and then compose novel sentences at test time describing "otter", without any "otter" image-caption examples at train time. Second, visual description models can be heavily driven by biases found in the training dataset. This can lead to object hallucination, in which models describe objects that are not present in an image. In Chapter 3, we propose tools to analyze language bias through the lens of object hallucination. Language bias can also lead to bias amplification
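The object-hallucination analysis mentioned in the abstract can be illustrated with a small sketch: flag a generated caption when it names an object that the image's ground-truth annotations do not contain. The Python below is a hypothetical toy (naive word matching against a fixed object vocabulary, with made-up function names), not the actual tooling from Chapter 3 of the dissertation.

    # Toy sketch only: flags captions that mention objects absent from the image.
    # Vocabulary, matching rule, and function names are illustrative assumptions.

    def hallucinated_objects(caption, image_objects, vocabulary):
        """Return vocabulary objects mentioned in the caption but absent from the image."""
        mentioned = {word for word in caption.lower().split() if word in vocabulary}
        return mentioned - image_objects

    def hallucination_rate(captions, object_sets, vocabulary):
        """Fraction of captions that mention at least one object not in the image."""
        flagged = sum(
            bool(hallucinated_objects(cap, objs, vocabulary))
            for cap, objs in zip(captions, object_sets)
        )
        return flagged / len(captions)

    vocab = {"child", "sofa", "field", "otter"}
    captions = ["a child sitting on a sofa", "an otter swimming"]
    truth = [{"child", "field"}, {"otter"}]
    print(hallucination_rate(captions, truth, vocab))  # 0.5

On this toy data, the first caption mentions a "sofa" that the image's annotations do not contain and is flagged, while the "otter" caption is grounded, giving a rate of 0.5.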
590 ▼a School code: 0028.
650 4 ▼a Artificial intelligence.
650 4 ▼a Computer engineering.
650 4 ▼a Linguistics.
650 4 ▼a Electrical engineering.
690 ▼a 0800
690 ▼a 0544
690 ▼a 0464
690 ▼a 0290
710 20 ▼a University of California, Berkeley. ▼b Electrical Engineering & Computer Sciences.
773 0 ▼t Dissertations Abstracts International ▼g 81-04A.
790 ▼a 0028
791 ▼a Ph.D.
792 ▼a 2019
793 ▼a English
856 40 ▼u http://www.riss.kr/pdu/ddodLink.do?id=T15491501 ▼n KERIS ▼z The full text of this material is provided by the Korea Education and Research Information Service (KERIS).
980 ▼a 202002 ▼f 2020
990 ▼a ***1816162
991 ▼a E-BOOK