LDR | | 00000nam u2200205 4500 |
001 | | 000000435039 |
005 | | 20200227113818 |
008 | | 200131s2019 ||||||||||||||||| ||eng d |
020 | |
▼a 9781085580472 |
035 | |
▼a (MiAaPQ)AAI27525227 |
035 | |
▼a (MiAaPQ)OhioLINKosu1555568920030042 |
040 | |
▼a MiAaPQ
▼c MiAaPQ
▼d 247004 |
082 | 0 |
▼a 004 |
100 | 1 |
▼a Ma, Siyuan. |
245 | 10 |
▼a Easy and Efficient Optimization for Modern Supervised Learning. |
260 | |
▼a [S.l.]:
▼b The Ohio State University,
▼c 2019. |
260 | 1 |
▼a Ann Arbor:
▼b ProQuest Dissertations & Theses,
▼c 2019. |
300 | |
▼a 135 p. |
500 | |
▼a Source: Dissertations Abstracts International, Volume: 81-02, Section: B. |
500 | |
▼a Advisor: Belkin, Mikhail. |
502 | 1 |
▼a Thesis (Ph.D.)--The Ohio State University, 2019. |
506 | |
▼a This item must not be sold to any third party vendors. |
520 | |
▼a In modern supervised learning, a model with excellent generalization performance is often learned by fitting a labeled dataset using Stochastic Gradient Descent (SGD), a procedure referred to as model training. One primary goal of modern machine learning research is to automate and accelerate this training procedure with strong performance guarantees. In this dissertation, we take a step toward this goal by deriving theories to analytically calculate the best parameters for SGD optimization and by developing a corresponding learning framework for a class of classical kernel machines. The proposed framework, which we call \textit{EigenPro}, not only automates the training procedure but also adapts it to the parallel computing capacity of a resource, leading to significant acceleration. In the first part, we analyze the convergence of mini-batch SGD under the interpolation setting, where the models are complex enough to achieve zero empirical loss. For a general class of convex losses, we show that SGD can achieve an exponential convergence rate using a constant step size, parallel to that of full gradient descent. For any given mini-batch size, we analytically derive the optimal constant step size to achieve the fastest convergence. We show that for efficient training, the batch size cannot be increased beyond a certain critical value, a phenomenon known as \textit{linear scaling}. For a general class of non-convex losses, we obtain similar results on the convergence of SGD using a constant step size. In practice, these analytical results allow for automating the parameter selection of SGD training. In the second part of this dissertation, we develop a learning framework, \textit{EigenPro}, to automate the training procedure in a principled manner. This is also the first analytical framework that extends linear scaling to match the parallel computing capacity of a resource. The framework is designed for a class of classical kernel machines. It automatically modifies a standard kernel machine to output a mathematically equivalent prediction function, yet allows for extended linear scaling, i.e., higher effective parallelization and faster training time on given hardware. The resulting algorithms are accurate, principled, and very fast. For example, using a single Titan Xp GPU, training on ImageNet with $1.3 \times 10^6$ data points and 1000 labels takes under an hour, while smaller datasets, such as MNIST, take seconds. |
590 | |
▼a School code: 0168. |
650 | 4 |
▼a Computer science. |
690 | |
▼a 0984 |
710 | 20 |
▼a The Ohio State University.
▼b Computer Science and Engineering. |
773 | 0 |
▼t Dissertations Abstracts International
▼g 81-02B. |
773 | |
▼t Dissertations Abstracts International |
790 | |
▼a 0168 |
791 | |
▼a Ph.D. |
792 | |
▼a 2019 |
793 | |
▼a English |
856 | 40 |
▼u http://www.riss.kr/pdu/ddodLink.do?id=T15494098
▼n KERIS
▼z The full text of this material is provided by the Korea Education and Research Information Service (KERIS). |
980 | |
▼a 202002
▼f 2020 |
990 | |
▼a ***1008102 |
991 | |
▼a E-BOOK |