Material type | Dissertation |
---|---|
Title/Author | Efficient Higher-order Optimization for Machine Learning. |
Personal author | Bullins, Brian Anderson. |
Corporate author | Princeton University. Computer Science. |
Publication | [S.l.]: Princeton University, 2019. |
Publication | Ann Arbor: ProQuest Dissertations & Theses, 2019. |
Physical description | 188 p. |
Source record | Dissertations Abstracts International 81-05B. |
ISBN | 9781687933744 |
Dissertation note | Thesis (Ph.D.)--Princeton University, 2019. |
General note | Source: Dissertations Abstracts International, Volume: 81-05, Section: B. Advisor: Hazan, Elad. |
Access restriction | This item must not be sold to any third-party vendors. |
Abstract | In recent years, stochastic gradient descent (SGD) has taken center stage for training large-scale models in machine learning. Although some higher-order methods have improved iteration complexity in theory, their per-iteration costs render them unusable when faced with millions of parameters and training examples. In this thesis, I will present several works that enable higher-order optimization to be as scalable as first-order methods. The first is LiSSA, a stochastic second-order algorithm for convex optimization that uses Hessian information to construct an unbiased Newton step in time linear in the problem dimension (a minimal sketch of this estimator appears below the record). To bypass the typical efficiency barriers for second-order methods, we harness the ERM structure of standard machine learning tasks. While convex problems allow for global convergence, recent state-of-the-art models, such as deep neural networks, highlight the importance of developing a better understanding of non-convex guarantees. To handle this challenging setting, I will present FastCubic, a Hessian-based method that provably converges to first-order critical points faster than gradient descent, while additionally converging to second-order critical points. Finally, I will establish how to leverage even higher-order derivative information by means of the FastQuartic algorithm, which achieves faster convergence for both smooth and non-smooth convex optimization problems by combining efficient tensor methods with near-optimal higher-order acceleration. |
General subject | Computer science. |
Language | English |
Link | The full text of this item is provided by the Korea Education and Research Information Service (KERIS). |
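
The unbiased Newton step summarized in the abstract can be made concrete. Below is a minimal sketch, not the thesis's implementation: it instantiates the estimator for one standard ERM objective (l2-regularized logistic regression), and the names `lissa_newton_step`, `depth`, and `reg` are illustrative assumptions rather than the dissertation's notation.

```python
import numpy as np

def lissa_newton_step(X, w, grad, reg, depth, rng):
    """Unbiased Newton-step estimator in the spirit of LiSSA, sketched
    for l2-regularized logistic regression (an ERM objective).

    Uses the Neumann-series recursion
        u_0 = grad,   u_j = grad + (I - H_{i_j}) u_{j-1},
    where H_{i_j} is the Hessian of a single random example, so each
    step costs one Hessian-vector product -- O(d) time here.
    Assumes the objective is scaled so that ||H|| <= 1.
    """
    n, _ = X.shape
    u = grad.copy()
    for _ in range(depth):
        x_i = X[rng.integers(n)]            # one random training example
        s = 1.0 / (1.0 + np.exp(-x_i @ w))  # sigmoid(x_i . w)
        # Per-example Hessian-vector product in O(d):
        #   H_i u = s(1 - s) * x_i (x_i . u) + reg * u
        hvp = s * (1.0 - s) * (x_i @ u) * x_i + reg * u
        u = grad + u - hvp                  # u_j = grad + (I - H_i) u_{j-1}
    return u                                # estimates H^{-1} grad
```

A full update would average several independent estimates of the step to control variance, then take `w -= step`; the O(d) per-sample Hessian-vector product is what keeps the overall cost linear in the problem dimension, as the abstract claims.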