MARC View
LDR00000nam u2200205 4500
001000000435387
00520200228094252
008200131s2019 ||||||||||||||||| ||eng d
020 ▼a 9781392620069
035 ▼a (MiAaPQ)AAI13811173
040 ▼a MiAaPQ ▼c MiAaPQ ▼d 247004
0820 ▼a 001
1001 ▼a Zhao, Junbo.
24510 ▼a Unsupervised Learning with Regularized Autoencoders.
260 ▼a [S.l.]: ▼b New York University., ▼c 2019.
260 1 ▼a Ann Arbor: ▼b ProQuest Dissertations & Theses, ▼c 2019.
300 ▼a 145 p.
500 ▼a Source: Dissertations Abstracts International, Volume: 81-05, Section: B.
500 ▼a Advisor: LeCun, Yann.
5021 ▼a Thesis (Ph.D.)--New York University, 2019.
506 ▼a This item must not be sold to any third party vendors.
520 ▼a Deep learning has enjoyed remarkable successes in a variety of domains. These successes often come at the cost of large annotated datasets and computationally heavy neural network models; this learning paradigm is called supervised learning. However, reducing the sample complexity while improving the universality of the trained models is a crucial next step that may lead toward artificial intelligence. Unsupervised learning, in contrast to supervised learning, aims to build neural network models with more generic loss objectives requiring little or no labeling effort, and therefore is not tied to any specific domain task. Despite the brevity of its goal, unsupervised learning is a broad topic that relates to or includes several sub-fields, such as density estimation, generative modeling, and world models. In this thesis, we primarily adopt an energy-based view unifying these different fields (LeCun, 2006). The desired energy function reflects the data manifold by differentiating the energy assigned to points on the data manifold from that assigned to points off the manifold. Building on this foundation, we first cast the popular autoencoder and adversarial learning frameworks into an energy-based perspective. Then we propose several techniques and architectures motivated by improving the learning of the energy function in an unsupervised setting. The thesis is organized as follows. First, we list a number of common strategies for shaping a good energy function through learning. Among these, we mainly target two strategies and extend their frontier. The resulting models demonstrate several applications made possible by using no or few labeled data, covering a wide spectrum of computer vision and language tasks, such as generation, text summarization, text style transfer, and transfer/semi-supervised learning.
590 ▼a School code: 0146.
650 4 ▼a Artificial intelligence.
690 ▼a 0800
71020 ▼a New York University. ▼b Computer Science.
7730 ▼t Dissertations Abstracts International ▼g 81-05B.
790 ▼a 0146
791 ▼a Ph.D.
792 ▼a 2019
793 ▼a English
85640 ▼u http://www.riss.kr/pdu/ddodLink.do?id=T15490682 ▼n KERIS ▼z The full text of this material is provided by KERIS (Korea Education and Research Information Service).
980 ▼a 202002 ▼f 2020
990 ▼a ***1816162
991 ▼a E-BOOK