MARC View
LDR00000nam u2200205 4500
001000000433608
00520200225145755
008200131s2019 ||||||||||||||||| ||eng d
020 ▼a 9781085779081
035 ▼a (MiAaPQ)AAI13814316
040 ▼a MiAaPQ ▼c MiAaPQ ▼d 247004
0820 ▼a 400
1001 ▼a Koncel-Kedziorski, Richard.
24510 ▼a Understanding and Generating Multi-Sentence Texts.
260 ▼a [S.l.] : ▼b University of Washington, ▼c 2019.
260 1 ▼a Ann Arbor: ▼b ProQuest Dissertations & Theses, ▼c 2019.
300 ▼a 107 p.
500 ▼a Source: Dissertations Abstracts International, Volume: 81-04, Section: A.
500 ▼a Advisor: Hajishirzi, Hannaneh
5021 ▼a Thesis (Ph.D.)--University of Washington, 2019.
506 ▼a This item must not be sold to any third party vendors.
506 ▼a This item must not be added to any third party search indexes.
520 ▼a English is often found in units composed of multiple sentences, but synthesizing information across sentence boundaries, whether for understanding or generation, is a difficult challenge for natural language processing algorithms. Techniques for such synthesis, however, are necessary for improved language understanding and have the potential to transform downstream applications including dialog systems, question answering, and educational technologies. In this thesis, I investigate techniques for understanding and generating multi-sentence natural language texts. As a first step toward general cross-sentence reasoning, I describe a model for solving open-world math word problems. The model treats word problem texts as semantically enhanced equation trees using a recursive semantic structure of quantities and generates possible solutions with an integer linear programming approach. Local and global information from the text is combined, and the model learns from data how to select the maximum-scoring tree to answer each problem. Continuing in the math word problem domain, I present an editing method for automatically customizing math word problems to meet thematic constraints. This technique preserves the complex document structure of human-authored text, editing in a globally coherent and syntactically informed way. Additionally, this model improves on previous thematic generation approaches by automatically building an understanding of theme from an arbitrary text. Reusing the existing syntactic and semantic relationships of the human-authored text to preserve its mathematical meaning, my method can produce novel and coherent thematic word problems in English. Finally, I outline a model for generating multi-sentence texts from knowledge graphs using an innovative neural encoding, and provide evidence that knowledge graphs can help structure the generation of longer English texts.
My novel graph-transforming encoder extends the recent transformer model for text encoding to graph-structured inputs. The overall model learns to encode the input graph and output text in an end-to-end fashion. Human and automatic evaluations show that relational knowledge improves generated text.
590 ▼a School code: 0250.
650 4 ▼a Computer science.
650 4 ▼a Linguistics.
650 4 ▼a Language.
690 ▼a 0984
690 ▼a 0290
690 ▼a 0679
71020 ▼a University of Washington. ▼b Linguistics.
7730 ▼t Dissertations Abstracts International ▼g 81-04A.
773 ▼t Dissertations Abstracts International
790 ▼a 0250
791 ▼a Ph.D.
792 ▼a 2019
793 ▼a English
85640 ▼u http://www.riss.kr/pdu/ddodLink.do?id=T15490787 ▼n KERIS ▼z The full text of this material is provided by the Korea Education and Research Information Service (KERIS).
980 ▼a 202002 ▼f 2020
990 ▼a ***1816162
991 ▼a E-BOOK