MARC View
LDR 00000nam u2200205 4500
001 000000433665
005 20200225152307
008 200131s2019 ||||||||||||||||| ||eng d
020 ▼a 9781687933744
035 ▼a (MiAaPQ)AAI22588734
040 ▼a MiAaPQ ▼c MiAaPQ ▼d 247004
082 0 ▼a 004
100 1 ▼a Bullins, Brian Anderson.
245 10 ▼a Efficient Higher-order Optimization for Machine Learning.
260 ▼a [S.l.]: ▼b Princeton University, ▼c 2019.
260 1 ▼a Ann Arbor: ▼b ProQuest Dissertations & Theses, ▼c 2019.
300 ▼a 188 p.
500 ▼a Source: Dissertations Abstracts International, Volume: 81-05, Section: B.
500 ▼a Advisor: Hazan, Elad.
502 1 ▼a Thesis (Ph.D.)--Princeton University, 2019.
506 ▼a This item must not be sold to any third party vendors.
520 ▼a In recent years, stochastic gradient descent (SGD) has taken center stage for training large-scale models in machine learning. Although some higher-order methods enjoy improved iteration complexity in theory, their per-iteration costs render them unusable when faced with millions of parameters and training examples. In this thesis, I will present several works that make higher-order optimization as scalable as first-order methods. The first is LiSSA, a stochastic second-order algorithm for convex optimization that uses Hessian information to construct an unbiased estimate of the Newton step in time linear in the problem dimension. To bypass the typical efficiency barriers for second-order methods, we harness the empirical risk minimization (ERM) structure of standard machine learning tasks. While convex problems allow for global convergence guarantees, recent state-of-the-art models, such as deep neural networks, highlight the importance of developing a better understanding of non-convex guarantees. To handle this challenging setting, I will present FastCubic, a Hessian-based method that provably converges to first-order critical points faster than gradient descent while additionally converging to second-order critical points. Finally, I will establish how to leverage even higher-order derivative information by means of the FastQuartic algorithm, which achieves faster convergence for both smooth and non-smooth convex optimization problems by combining efficient tensor methods with near-optimal higher-order acceleration.
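The abstract above summarizes the core idea behind LiSSA: approximate the Newton step H^{-1}g using only Hessian-vector products of single-example losses, via a truncated Neumann-series expansion of the inverse Hessian, so each step costs time linear in the problem dimension. The following is a minimal sketch of that idea, not code from the thesis; the regularized logistic-regression setup, function names, parameters, and synthetic data are all illustrative assumptions.

import numpy as np

def hvp_single(w, x, y, v, reg):
    # Hessian-vector product of one regularized logistic-loss term:
    # for l(w) = log(1 + exp(-y * x.w)) + reg/2 * ||w||^2, the Hessian is
    # s*(1-s) * x x^T + reg * I with s = sigmoid(y * x.w), so multiplying
    # it by v costs O(d) rather than the O(d^2) of a dense Hessian.
    s = 1.0 / (1.0 + np.exp(-y * x.dot(w)))
    return s * (1.0 - s) * x.dot(v) * x + reg * v

def lissa_style_newton_step(w, X, Y, grad, reg, depth=50, scale=0.1):
    # Stochastic estimate of H^{-1} grad via the truncated Neumann series
    # H^{-1} = sum_i (I - H)^i, sampling one example's Hessian per term.
    # `scale` rescales the problem so the sampled Hessians have norm < 1,
    # which the series expansion requires (an assumed, simplistic choice).
    n = X.shape[0]
    estimate = grad.copy()
    for _ in range(depth):
        i = np.random.randint(n)
        # estimate <- grad + (I - scale * H_i) estimate
        estimate = grad + estimate - scale * hvp_single(w, X[i], Y[i], estimate, reg)
    return scale * estimate

# Tiny usage example on synthetic data (for illustration only).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
Y = np.sign(X[:, 0] + 0.1 * rng.normal(size=200))
w = np.zeros(10)
reg = 0.1
for _ in range(25):
    s = 1.0 / (1.0 + np.exp(-Y * (X @ w)))            # per-example sigmoids
    grad = -((1.0 - s) * Y) @ X / len(Y) + reg * w    # full ERM gradient
    w -= lissa_style_newton_step(w, X, Y, grad, reg)  # approximate Newton step

In the full LiSSA algorithm, several independent estimators of this kind are averaged, and the truncation depth is chosen to trade off the bias and variance of the Newton-step estimate; the sketch above omits those refinements for brevity.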
590 ▼a School code: 0181.
650 4 ▼a Computer science.
690 ▼a 0984
710 20 ▼a Princeton University. ▼b Computer Science.
773 0 ▼t Dissertations Abstracts International ▼g 81-05B.
773 ▼t Dissertations Abstracts International
790 ▼a 0181
791 ▼a Ph.D.
792 ▼a 2019
793 ▼a English
856 40 ▼u http://www.riss.kr/pdu/ddodLink.do?id=T15493118 ▼n KERIS ▼z The full text of this material is provided by the Korea Education and Research Information Service (KERIS).
980 ▼a 202002 ▼f 2020
990 ▼a ***1008102
991 ▼a E-BOOK