Material Type | Thesis (Dissertation) |
---|---|
Title/Author | Differentiable and Robust Optimization Algorithms. |
Personal Author | Powers, Thomas. |
Corporate Author | University of Washington. Electrical and Computer Engineering. |
Publication | [S.l.]: University of Washington., 2019. |
Publication | Ann Arbor: ProQuest Dissertations & Theses, 2019. |
Physical Description | 86 p. |
Source Record | Dissertations Abstracts International 81-04B. |
ISBN | 9781088329672 |
Dissertation Note | Thesis (Ph.D.)--University of Washington, 2019. |
General Note | Source: Dissertations Abstracts International, Volume: 81-04, Section: B. Advisor: Atlas, Les E. |
Use Restrictions | This item must not be sold to any third party vendors. This item must not be added to any third party search indexes. |
Abstract | Imposing appropriate structure or constraints onto optimization problems is often the key to deriving guarantees or improving performance aspects like generalization or interpretability. The main contribution of this dissertation is developing algorithms that can leverage underlying submodular or sparse structure to do robust optimization: unfolded discrete and continuous optimization algorithms and robust submodular optimization. While deep neural networks (DNNs) continue to advance the state of the art for many tasks in fields such as speech and audio processing, natural language processing, and computer vision over traditional statistical generative models, many of the most popular architectures are difficult to analyze and require large amounts of data to achieve such good performance. The choice between DNN and generative model need not be binary, however. Using a technique called deep unfolding, inference algorithms for generative models can be turned into DNNs and trained discriminatively. Such models can also leverage sparsity, submodularity, or other regularization frameworks while still being able to make use of domain knowledge integrated into the underlying model. Subset selection problems are important for many applications in machine learning such as feature and data selection, dictionary learning, compression, and sparse recovery. While these problems are generally NP-hard, if the objective function is submodular (or in some cases has submodular structure), a near-optimal solution can be found in polynomial time. Deep learning and subset selection overlap when sparse models are used. For DNNs and other continuous models, sparsity is typically induced via a penalty function such as the ℓ1 norm, whereas sparsity is induced via a hard cardinality constraint for subset selection algorithms. This dissertation explores algorithms around the intersection of subset selection and deep learning.
Broadly, there are two sets of algorithms presented in this dissertation. In the first, algorithms are designed to approximately solve both submodular and non-submodular optimization problems. These algorithms are applied to sensor scheduling and placement problems. In the second, deep unfolding is used to turn inference algorithms into DNNs. These unfolded models have a number of advantages over conventional DNNs: they achieve competitive or improved performance on a variety of tasks, admit principled initializations, and tend to need less data than conventional network architectures. |
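The abstract's claim that submodular subset selection admits near-optimal polynomial-time solutions refers to the classic greedy algorithm: for a monotone submodular objective under a cardinality constraint, greedy selection is guaranteed to reach at least a (1 - 1/e) fraction of the optimum (Nemhauser, Wolsey, and Fisher). A minimal sketch, with a toy coverage objective standing in for a real sensor-placement utility (the function names `greedy_select` and `coverage_gain` are illustrative, not from the dissertation):

```python
def greedy_select(gain, ground_set, k):
    """Greedily pick k elements, each time adding the element with the
    largest marginal gain. gain(S, e) returns the marginal gain of adding
    element e to the already-selected list S."""
    selected = []
    candidates = set(ground_set)
    for _ in range(k):
        best = max(candidates, key=lambda e: gain(selected, e))
        selected.append(best)
        candidates.remove(best)
    return selected

# Toy monotone submodular objective: set coverage, f(S) = |union of sets in S|.
sets = {0: {1, 2, 3}, 1: {3, 4}, 2: {4, 5, 6, 7}}

def coverage_gain(S, e):
    covered = set().union(*(sets[i] for i in S)) if S else set()
    return len(sets[e] - covered)

print(greedy_select(coverage_gain, sets.keys(), 2))  # -> [2, 0]
```

Note the contrast the abstract draws: here sparsity enters as the hard constraint `k`, whereas continuous models typically obtain it through an ℓ1 penalty.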
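To make "deep unfolding" concrete: the iterations of an inference algorithm for a generative model become the layers of a network, whose weights can then be trained discriminatively (the classic example is unfolding ISTA for sparse coding into LISTA, per Gregor and LeCun). The sketch below runs only the untrained, analytically initialized version, illustrating the "principled initialization" the abstract mentions; the matrices `W1` and `W2` are the quantities a trained unfolded model would learn. This is a generic illustration, not the dissertation's own architecture:

```python
import numpy as np

def soft_threshold(x, theta):
    # Proximal operator of the l1 norm: the per-layer nonlinearity.
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def unfolded_ista(A, y, n_layers=100, lam=0.1):
    """Run n_layers unfolded ISTA iterations for min 0.5||Ax - y||^2 + lam||x||_1.
    Each loop iteration corresponds to one network layer."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    W1 = A.T / L                             # layer weights initialized from the model
    W2 = np.eye(A.shape[1]) - A.T @ A / L
    x = np.zeros(A.shape[1])
    for _ in range(n_layers):
        x = soft_threshold(W1 @ y + W2 @ x, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
x_true = np.zeros(50)
x_true[[3, 17]] = [1.0, -2.0]                # 2-sparse ground truth
y = A @ x_true
x_hat = unfolded_ista(A, y, n_layers=200)
print(np.argsort(-np.abs(x_hat))[:2])        # indices of the largest recovered entries
```

Training would replace the fixed `W1`, `W2`, and threshold with learned, possibly layer-specific parameters, which is how the unfolded model departs from the plain inference algorithm.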
Subjects | Electrical engineering. Artificial intelligence. Algorithms. Optimization. Deep learning. Sparsity. Neural networks. |
Language | English |
Link | The full text of this item is provided by KERIS (Korea Education and Research Information Service). |