MARC View
LDR00000nam u2200205 4500
001000000432226
00520200224115133
008200131s2019 ||||||||||||||||| ||eng d
020 ▼a 9781088329672
035 ▼a (MiAaPQ)AAI13902948
040 ▼a MiAaPQ ▼c MiAaPQ ▼d 247004
0820 ▼a 610
1001 ▼a Powers, Thomas.
24510 ▼a Differentiable and Robust Optimization Algorithms.
260 ▼a [S.l.] : ▼b University of Washington, ▼c 2019.
260 1 ▼a Ann Arbor: ▼b ProQuest Dissertations & Theses, ▼c 2019.
300 ▼a 86 p.
500 ▼a Source: Dissertations Abstracts International, Volume: 81-04, Section: B.
500 ▼a Advisor: Atlas, Les E.
5021 ▼a Thesis (Ph.D.)--University of Washington, 2019.
506 ▼a This item must not be sold to any third party vendors.
506 ▼a This item must not be added to any third party search indexes.
520 ▼a Imposing appropriate structure or constraints on optimization problems is often the key to deriving guarantees or improving performance aspects such as generalization or interpretability. The main contribution of this dissertation is developing algorithms that can leverage underlying submodular or sparse structure to perform robust optimization: unfolded discrete and continuous optimization algorithms and robust submodular optimization. While deep neural networks (DNNs) continue to advance the state of the art over traditional statistical generative models for many tasks in fields such as speech and audio processing, natural language processing, and computer vision, many of the most popular architectures are difficult to analyze and require large amounts of data to achieve such good performance. The choice between a DNN and a generative model need not be binary, however. Using a technique called deep unfolding, inference algorithms for generative models can be turned into DNNs and trained discriminatively. Such models can also leverage sparsity, submodularity, or other regularization frameworks while still being able to make use of domain knowledge integrated into the underlying model. Subset selection problems are important for many applications in machine learning, such as feature and data selection, dictionary learning, compression, and sparse recovery. While these problems are generally NP-hard, if the objective function is submodular (or in some cases has submodular structure), a near-optimal solution can be found in polynomial time. Deep learning and subset selection overlap when sparse models are used. For DNNs and other continuous models, sparsity is typically induced via a penalty function such as the ℓ1 norm, whereas for subset selection algorithms it is induced via a hard cardinality constraint. This dissertation explores algorithms around the intersection of subset selection and deep learning. Broadly, there are two sets of algorithms presented in this dissertation. In the first, algorithms are designed to approximately solve both submodular and non-submodular optimization problems. These algorithms are applied to sensor scheduling and placement problems. In the second, deep unfolding is used to turn inference algorithms into DNNs. These unfolded models have a number of advantages over conventional DNN models: they are shown to have competitive or improved performance on a variety of tasks, can have principled initializations, and tend to need less data than conventional network architectures.
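520 ▼a [Illustrative note, not part of the catalog record or the dissertation text] The abstract's claim that a near-optimal solution exists for submodular subset selection under a cardinality constraint refers to the classic greedy strategy, which attains a (1 - 1/e) approximation for monotone submodular objectives. A minimal Python sketch is given below; the function names and the toy coverage objective are hypothetical examples, not the dissertation's code.
      # Greedy maximization of a monotone submodular set function f
      # subject to |S| <= k. Names and data below are illustrative only.
      def greedy_subset_selection(items, f, k):
          """Add, k times, the item with the largest marginal gain."""
          selected = []
          for _ in range(k):
              best_item, best_gain = None, 0.0
              for item in items:
                  if item in selected:
                      continue
                  gain = f(selected + [item]) - f(selected)  # marginal gain
                  if gain > best_gain:
                      best_item, best_gain = item, gain
              if best_item is None:  # no remaining item improves f
                  break
              selected.append(best_item)
          return selected

      if __name__ == "__main__":
          # Hypothetical set-cover style objective: f(S) = elements covered.
          coverage = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6}, "d": {1, 6}}
          f = lambda S: len(set().union(*(coverage[i] for i in S))) if S else 0
          print(greedy_subset_selection(list(coverage), f, k=2))  # -> ['a', 'c']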
590 ▼a School code: 0250.
650 4 ▼a Electrical engineering.
650 4 ▼a Artificial intelligence.
650 4 ▼a Algorithms.
650 4 ▼a Optimization.
650 4 ▼a Deep learning.
650 4 ▼a Sparsity.
650 4 ▼a Neural networks.
690 ▼a 0544
690 ▼a 0800
71020 ▼a University of Washington. ▼b Electrical and Computer Engineering.
7730 ▼t Dissertations Abstracts International ▼g 81-04B.
790 ▼a 0250
791 ▼a Ph.D.
792 ▼a 2019
793 ▼a English
85640 ▼u http://www.riss.kr/pdu/ddodLink.do?id=T15492413 ▼n KERIS ▼z The full text of this material is provided by the Korea Education and Research Information Service (KERIS).
980 ▼a 202002 ▼f 2020
990 ▼a ***1008102
991 ▼a E-BOOK