Material Type | Dissertation |
---|---|
Title/Author | Easy and Efficient Optimization for Modern Supervised Learning. |
Personal Author | Ma, Siyuan. |
Corporate Author | The Ohio State University. Computer Science and Engineering. |
Publication | [S.l.]: The Ohio State University, 2019. |
Publication | Ann Arbor: ProQuest Dissertations & Theses, 2019. |
Physical Description | 135 p. |
Source | Dissertations Abstracts International 81-02B. |
ISBN | 9781085580472 |
Dissertation Note | Thesis (Ph.D.)--The Ohio State University, 2019. |
General Note | Source: Dissertations Abstracts International, Volume: 81-02, Section: B. Advisor: Belkin, Mikhail. |
Restrictions | This item must not be sold to any third party vendors. |
Abstract | In modern supervised learning, a model with excellent generalization performance is often learned by fitting a labeled dataset using Stochastic Gradient Descent (SGD), a procedure referred to as model training. One primary goal of modern machine learning research is to automate and accelerate this training procedure with strong performance guarantees. In this dissertation, we take a step toward this goal by deriving theory to analytically calculate the best parameters for SGD optimization and developing a corresponding learning framework for a class of classical kernel machines. The proposed framework, which we call \textit{EigenPro}, not only automates the training procedure, but also adapts it to the parallel computing capacity of a resource, leading to significant acceleration. In the first part, we analyze the convergence of mini-batch SGD under the interpolation setting, where the models are complex enough to achieve zero empirical loss. For a general class of convex losses, we show that SGD can achieve an exponential convergence rate using a constant step size, parallel to that of full gradient descent. For any given mini-batch size, we analytically derive the optimal constant step size to achieve the fastest convergence. We show that for efficient training, the batch size cannot be increased beyond a certain critical value, a phenomenon known as \textit{linear scaling}. For a general class of non-convex losses, we obtain similar results on the convergence of SGD using a constant step size. In practice, these analytical results allow for automating the parameter selection of SGD training. In the second part of this dissertation, we develop a learning framework, \textit{EigenPro}, to automate the training procedure in a principled manner. This is also the first analytical framework that extends linear scaling to match the parallel computing capacity of a resource. The framework is designed for a class of classical kernel machines. It automatically modifies a standard kernel machine to output a mathematically equivalent prediction function, while allowing for extended linear scaling, i.e., higher effective parallelization and faster training time on given hardware. The resulting algorithms are accurate, principled, and very fast. For example, using a single Titan Xp GPU, training on ImageNet with $1.3 \times 10^6$ data points and 1000 labels takes under an hour, while smaller datasets, such as MNIST, take seconds. |
Subject | Computer science. |
Language | English |
Link | The full text of this item is provided by the Korea Education and Research Information Service (KERIS). |
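The abstract describes training kernel machines by mini-batch SGD with a constant step size in the interpolation regime. The sketch below is only an illustration of that general setup, not the dissertation's EigenPro algorithm or its optimal step-size formula: it fits a Gaussian-kernel regressor by function-space mini-batch SGD with a fixed step size. The kernel choice, the heuristic step size, and all function names are assumptions made for this example.

```python
# Minimal illustrative sketch (not the dissertation's code): mini-batch SGD
# with a constant step size for kernel least-squares regression.
import numpy as np


def gaussian_kernel(X, Z, bandwidth=1.0):
    # Gaussian (RBF) kernel matrix between the rows of X and the rows of Z.
    sq_dists = (
        np.sum(X**2, axis=1)[:, None]
        + np.sum(Z**2, axis=1)[None, :]
        - 2.0 * X @ Z.T
    )
    return np.exp(-sq_dists / (2.0 * bandwidth**2))


def sgd_kernel_regression(X, y, batch_size=32, epochs=20, step=1.0, seed=0):
    # Fit f(x) = sum_i alpha_i k(x_i, x) to (X, y) by mini-batch SGD on the
    # squared loss, using one constant step size throughout (no decay schedule).
    # step=1.0 is a conservative heuristic for kernels with k(x, x) <= 1, such
    # as the Gaussian kernel above; it is not the optimal step size the thesis derives.
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    K = gaussian_kernel(X, X)  # n x n kernel matrix over the training set
    alpha = np.zeros(n)
    for _ in range(epochs):
        perm = rng.permutation(n)
        for idx in np.array_split(perm, max(1, n // batch_size)):
            residual = K[idx] @ alpha - y[idx]            # f(x_b) - y_b on the batch
            alpha[idx] -= (step / len(idx)) * residual    # constant-step SGD update
    return alpha


if __name__ == "__main__":
    # Tiny synthetic usage example with a noiseless target, so interpolation is possible.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 5))
    y = np.sin(X[:, 0])
    alpha = sgd_kernel_regression(X, y)
    mse = np.mean((gaussian_kernel(X, X) @ alpha - y) ** 2)
    print(f"training MSE after SGD: {mse:.5f}")
```

In this sketch the step size stays fixed for the entire run, which is the setting the abstract analyzes; in practice the batch size and step size would be chosen together, since the abstract notes that increasing the batch size beyond a critical value no longer speeds up training.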