Material Type | Dissertation |
---|---|
Title/Author | Methods for Improving Natural Language Processing Techniques with Linguistic Regularities Extracted from Large Unlabeled Text Corpora. |
Personal Author | Lucas, Michael Ryan. |
Corporate Author | Northwestern University. Computer Science. |
Publication | [S.l.]: Northwestern University, 2019. |
Publication | Ann Arbor: ProQuest Dissertations & Theses, 2019. |
Physical Description | 117 p. |
Source | Dissertations Abstracts International 81-04B. |
ISBN | 9781088320839 |
Dissertation Note | Thesis (Ph.D.)--Northwestern University, 2019. |
General Note | Source: Dissertations Abstracts International, Volume: 81-04, Section: B. Advisor: Downey, Doug. |
Use Restrictions | This item must not be sold to any third party vendors. This item must not be added to any third party search indexes. |
Abstract | Natural Language Processing methods have become increasingly important for a variety of high- and low-level tasks, including speech recognition, question answering, and automatic language translation. The state-of-the-art performance of these methods is continuously advancing, but reliance on labeled training data sets often creates an artificial upper bound on performance due to the limited availability of labeled data, especially in settings where annotations by human experts are expensive to acquire. In comparison, unlabeled text data is constantly generated by Internet users around the world, and at scale this data can provide critical insights into human language. This work contributes two novel methods of extracting insights from large unlabeled text corpora in order to improve the performance of machine learning models. The first contribution is an improvement to the decades-old Multinomial Naive Bayes (MNB) classifier. Our method utilizes word frequencies over a large unlabeled text corpus to improve MNB's underlying conditional probability estimates and subsequently achieve state-of-the-art performance in the semi-supervised setting. The second contribution is a novel neural network method capable of simultaneously generating multi-sense word embeddings and performing word sense disambiguation without relying on a sense-disambiguated training corpus or prior knowledge of word meanings. In both cases, our models illustrate how modern machine learning approaches can benefit from the disciplined integration of large text corpora, which are constantly growing and only becoming cheaper to collect as technology advances. |
Subject | Artificial intelligence. Computer science. |
Language | English |
Link | The full text of this item is provided by the Korea Education and Research Information Service (KERIS). |
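The abstract's first contribution describes improving MNB's conditional probability estimates using word frequencies from a large unlabeled corpus. A minimal sketch of that general idea follows, assuming a simple interpolation scheme in which labeled counts are smoothed toward a background word distribution estimated from unlabeled text, in place of the uniform Laplace prior. The function names and the particular smoothing form here are illustrative assumptions, not the dissertation's actual formulation.

```python
# Illustrative sketch (not the dissertation's exact method): Multinomial
# Naive Bayes whose per-class word probabilities are smoothed toward a
# background distribution estimated from an unlabeled corpus, rather than
# toward the uniform distribution implied by standard Laplace smoothing.
import math
from collections import Counter, defaultdict

def train_mnb(labeled_docs, unlabeled_docs, smoothing=1.0):
    """labeled_docs: list of (tokens, label); unlabeled_docs: list of tokens."""
    # Background word distribution from the unlabeled corpus (add-one smoothed).
    bg = Counter(tok for doc in unlabeled_docs for tok in doc)
    bg_total = sum(bg.values())
    vocab = set(bg) | {t for doc, _ in labeled_docs for t in doc}
    bg_prob = {w: (bg[w] + 1) / (bg_total + len(vocab)) for w in vocab}

    class_counts = defaultdict(Counter)
    class_docs = Counter()
    for doc, label in labeled_docs:
        class_counts[label].update(doc)
        class_docs[label] += 1

    model = {}
    for label, counts in class_counts.items():
        total = sum(counts.values())
        # Interpolate labeled counts with pseudo-counts drawn from the
        # background distribution; probabilities still sum to 1 per class.
        cond = {w: (counts[w] + smoothing * len(vocab) * bg_prob[w])
                   / (total + smoothing * len(vocab)) for w in vocab}
        prior = class_docs[label] / len(labeled_docs)
        model[label] = (math.log(prior), {w: math.log(p) for w, p in cond.items()})
    return model

def predict(model, doc):
    # Standard MNB decision rule: argmax of log prior plus log likelihoods.
    def score(label):
        log_prior, log_cond = model[label]
        return log_prior + sum(log_cond.get(t, 0.0) for t in doc)
    return max(model, key=score)
```

For example, with two labeled documents and a small unlabeled corpus, words like "movie" that never appear in the labeled data still receive class-conditional probabilities shaped by their unlabeled-corpus frequency rather than a flat pseudo-count.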