Daegu Haany University Hyangsan Library

Tackling Graphical NLP Problems with Graph Recurrent Networks

Detailed Information
Material Type: Dissertation
Title/Author: Tackling Graphical NLP Problems with Graph Recurrent Networks.
Personal Author: Song, Linfeng.
Corporate Author: University of Rochester. Hajim School of Engineering and Applied Sciences.
Publication: [S.l.]: University of Rochester, 2019.
Publication: Ann Arbor: ProQuest Dissertations & Theses, 2019.
Physical Description: 149 p.
Source Record: Dissertations Abstracts International 81-04B (Dissertation Abstracts International).
ISBN: 9781085783392
Dissertation Note: Thesis (Ph.D.)--University of Rochester, 2019.
General Note: Source: Dissertations Abstracts International, Volume: 81-04, Section: B.
Advisor: Gildea, Daniel.
Restrictions on Use: This item must not be sold to any third party vendors.
Abstract: How to properly model graphs is a long-standing and important problem in natural language processing, where popular types of graphs include knowledge graphs, semantic graphs, and dependency graphs. Compared with other data structures, such as sequences and trees, graphs are generally more powerful in representing complex correlations among entities. For example, a knowledge graph stores real-world entities (such as "Barack_Obama" and "U.S.") and their relations (such as "live_in" and "lead_by"). Properly encoding a knowledge graph benefits user applications such as question answering and knowledge discovery. Modeling graphs is also very challenging, largely because graphs usually contain massive and cyclic relations. For instance, a tree with n nodes has n - 1 edges (relations), while a complete graph with n nodes can have O(n^2) edges (relations).

Recent years have witnessed the success of deep learning, especially RNN-based models, on many NLP problems, including machine translation (Cho et al., 2014) and question answering (Shen et al., 2017). RNNs and their variants have also been studied extensively on several graph problems and have shown preliminary success. Despite these successes, RNN-based models suffer from several major drawbacks. First, they can only consume sequential data, so linearization is required to serialize input graphs, resulting in the loss of important structural information. In particular, graph nodes that are originally close together can end up very far apart after linearization, which makes it hard for RNNs to model their relation. Second, the serialization results are usually very long, so RNNs take a long time to encode them.

In this thesis, we propose a novel graph neural network, the graph recurrent network (GRN). GRN maintains a hidden state for each graph node and relies on an iterative message-passing framework to update these hidden states in parallel (see the sketch after this record). Within each iteration, neighboring nodes exchange information with each other, so that they absorb increasingly global knowledge. Unlike RNNs, which require an absolute order (such as left-to-right) for execution, GRN requires only relative neighborhood information, making it general and flexible across a variety of data structures.

We study our GRN model on four very different tasks, including machine reading comprehension, relation extraction, and machine translation. Some tasks (such as machine translation) require generating sequences, while others require only a single decision (classification). Some take undirected graphs without edge labels, while others take directed graphs with edge labels. To accommodate these differences, we gradually enhance our GRN model, for example by further considering edge labels and adding an RNN decoder. Carefully designed experiments show the effectiveness of GRN on all these tasks.
Subject: Computer science.
Artificial intelligence.
Language: English
Link: URL: The full text of this item is provided by the Korea Education and Research Information Service (KERIS).
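
The abstract above describes GRN's core update: one hidden state per graph node, refreshed in parallel by exchanging messages with neighbors so that k iterations propagate information k hops. The following is a minimal sketch of that scheme in PyTorch; the class name, the GRU-cell update, the mean aggregation over neighbors, and all hyperparameters are illustrative assumptions rather than the thesis's exact formulation, which further handles edge labels and adds an RNN decoder.

```python
import torch
import torch.nn as nn


class GraphRecurrentNetwork(nn.Module):
    """Minimal GRN sketch: one hidden state per node, updated in
    parallel by iterative message passing over the adjacency matrix."""

    def __init__(self, hidden_dim: int, num_steps: int = 5):
        super().__init__()
        self.num_steps = num_steps
        # A GRU cell treats the aggregated neighbor message as its
        # "input" and the node's own vector as the recurrent state.
        self.cell = nn.GRUCell(hidden_dim, hidden_dim)

    def forward(self, node_feats: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # node_feats: (num_nodes, hidden_dim) initial node embeddings
        # adj: (num_nodes, num_nodes) 0/1 adjacency; no node ordering assumed
        h = node_feats
        # Normalize so each node averages its neighbors' current states.
        degree = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        for _ in range(self.num_steps):
            # Each iteration, every node absorbs its neighbors' states;
            # after k iterations, information has traveled k hops.
            messages = (adj @ h) / degree
            h = self.cell(messages, h)
        return h  # final hidden states, one per node


if __name__ == "__main__":
    torch.manual_seed(0)
    # Toy undirected graph: a path 0-1-2-3.
    adj = torch.tensor([[0., 1., 0., 0.],
                        [1., 0., 1., 0.],
                        [0., 1., 0., 1.],
                        [0., 0., 1., 0.]])
    feats = torch.randn(4, 16)
    grn = GraphRecurrentNetwork(hidden_dim=16, num_steps=3)
    print(grn(feats, adj).shape)  # torch.Size([4, 16])
```

Note the contrast with an RNN encoder: no linearization of the graph is needed, only the neighborhood structure in adj, which is what makes the update order-free and applicable to cyclic graphs.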
