LDR00000nam u2200205 4500
001000000433933
00520200226134246
008200131s2019 ||||||||||||||||| ||eng d
020 ▼a 9781088384176
035 ▼a (MiAaPQ)AAI22616059
040 ▼a MiAaPQ ▼c MiAaPQ ▼d 247004
0820 ▼a 004
1001 ▼a Katiyar, Arzoo.
24510 ▼a Learning Structured Information from Language.
260 ▼a [S.l.]: ▼b Cornell University., ▼c 2019.
260 1 ▼a Ann Arbor: ▼b ProQuest Dissertations & Theses, ▼c 2019.
300 ▼a 148 p.
500 ▼a Source: Dissertations Abstracts International, Volume: 81-04, Section: B.
500 ▼a Advisor: Cardie, Claire.
5021 ▼a Thesis (Ph.D.)--Cornell University, 2019.
506 ▼a This item must not be sold to any third party vendors.
520 ▼a Extracting information from text entails deriving a structured, and typically domain-specific, representation of entities and relations from unstructured text. The information thus extracted can facilitate applications such as question answering, information retrieval, conversational dialogue, and opinion analysis. However, extracting information from text in a structured form is difficult: it requires understanding words and the relations between them in the context of both the current sentence and the document as a whole. In this thesis, we present our research on neural models that learn structured output representations composed of textual mentions of entities and relations within a sentence. In particular, we propose novel output representations that allow neural models to learn better dependencies in the output structure and achieve state-of-the-art performance on both tasks. We also propose models that handle the nested variant of the entity mention problem and achieve state-of-the-art performance. We then present our recent work on expanding the input context beyond the sentence by incorporating coreference resolution to learn entity-level rather than mention-level representations, and show that these representations are important for improving relation extraction. Our analysis shows that entity-level representations, which capture the salience of entities in the document, are beneficial for relation extraction. Finally, we briefly discuss incorporating biases into neural network models and show the resulting improvements in information extraction performance.
590 ▼a School code: 0058.
650 4 ▼a Computer science.
690 ▼a 0984
71020 ▼a Cornell University. ▼b Computer Science.
7730 ▼t Dissertations Abstracts International ▼g 81-04B.
790 ▼a 0058
791 ▼a Ph.D.
792 ▼a 2019
793 ▼a English
85640 ▼u http://www.riss.kr/pdu/ddodLink.do?id=T15493357 ▼n KERIS ▼z The full text of this material is provided by the Korea Education and Research Information Service (KERIS).
980 ▼a 202002 ▼f 2020
990 ▼a ***1008102
991 ▼a E-BOOK