Material type | Dissertation |
---|---|
Title/Author | Learning Structured Information from Language. |
Personal author | Katiyar, Arzoo. |
Corporate author | Cornell University. Computer Science. |
Publication | [S.l.]: Cornell University., 2019. |
Publication | Ann Arbor: ProQuest Dissertations & Theses, 2019. |
Physical description | 148 p. |
Host item entry | Dissertations Abstracts International 81-04B. |
ISBN | 9781088384176 |
Dissertation note | Thesis (Ph.D.)--Cornell University, 2019. |
General note | Source: Dissertations Abstracts International, Volume: 81-04, Section: B. Advisor: Cardie, Claire. |
Restrictions on use | This item must not be sold to any third party vendors. |
Abstract | Extracting information from text entails deriving a structured, and typically domain-specific, representation of entities and relations from unstructured text. The extracted information can potentially facilitate applications such as question answering, information retrieval, conversational dialogue, and opinion analysis. However, extracting information from text in a structured form is difficult: it requires understanding words and the relations between them in the context of both the current sentence and the document as a whole. In this thesis, we present our research on neural models that learn structured output representations comprising textual mentions of entities and relations within a sentence. In particular, we propose novel output representations that allow the neural models to learn better dependencies in the output structure and achieve state-of-the-art performance on both tasks. We also propose models that handle the nested variant of the entity mention extraction problem and likewise achieve state-of-the-art performance. We then present our recent work on expanding the input context beyond the sentence by incorporating coreference resolution to learn entity-level rather than mention-level representations, and show that these representations are important for improving relation extraction. Our analysis shows that entity-level representations, which capture the saliency of entities in the document, are beneficial for relation extraction. Finally, we briefly discuss incorporating biases into neural network models and show improvements in information extraction performance. |
Subject | Computer science. |
Language | English |
Link | The full text of this item is provided by KERIS (Korea Education and Research Information Service). |
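The abstract describes extracting mention-level entity representations from sentences. As a generic illustration of that task (not the thesis's actual model, whose output representations are novel), a minimal sketch of decoding a standard BIO tag sequence into entity mention spans:

```python
# Minimal sketch: decoding BIO tags into entity mention spans.
# Generic illustration of mention-level entity extraction; the tag
# scheme, entity types, and example sentence are assumptions, not
# taken from the thesis.

def bio_to_spans(tags):
    """Convert a BIO tag sequence into (start, end, type) spans, end exclusive."""
    spans = []
    start, etype = None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-"):
            if start is not None:          # close any open span
                spans.append((start, i, etype))
            start, etype = i, tag[2:]      # open a new span
        elif tag.startswith("I-") and etype == tag[2:]:
            continue                       # extend the current span
        else:                              # "O" or a type mismatch
            if start is not None:
                spans.append((start, i, etype))
                start, etype = None, None
    if start is not None:                  # span running to the end
        spans.append((start, len(tags), etype))
    return spans

tokens = ["Arzoo", "Katiyar", "studied", "at", "Cornell", "University"]
tags   = ["B-PER", "I-PER", "O", "O", "B-ORG", "I-ORG"]
print(bio_to_spans(tags))  # [(0, 2, 'PER'), (4, 6, 'ORG')]
```

Flat BIO decoding like this cannot represent overlapping mentions, which is exactly why the nested variant of the problem mentioned in the abstract requires richer output representations.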