Daegu Haany University Hyangsan Library


Easy and Efficient Optimization for Modern Supervised Learning


Record Details
Material Type: Thesis (Dissertation)
Title/Author: Easy and Efficient Optimization for Modern Supervised Learning.
Personal Author: Ma, Siyuan.
Corporate Author: The Ohio State University. Computer Science and Engineering.
Publication: [S.l.]: The Ohio State University, 2019.
Publication: Ann Arbor: ProQuest Dissertations & Theses, 2019.
Physical Description: 135 p.
Source Record: Dissertations Abstracts International 81-02B.
Dissertation Abstracts International
ISBN: 9781085580472
Dissertation Note: Thesis (Ph.D.)--The Ohio State University, 2019.
General Note: Source: Dissertations Abstracts International, Volume: 81-02, Section: B.
Advisor: Belkin, Mikhail.
Restrictions on Use: This item must not be sold to any third party vendors.
Abstract: In modern supervised learning, a model with excellent generalization performance is often learned by fitting a labeled dataset using Stochastic Gradient Descent (SGD), a procedure referred to as model training. One primary goal of modern machine learning research is to automate and accelerate this training procedure with strong performance guarantees. In this dissertation, we take a step toward this goal by deriving theories to analytically calculate the best parameters for SGD optimization and by developing a corresponding learning framework for a class of classical kernel machines. The proposed framework, which we call EigenPro, not only automates the training procedure but also adapts it to the parallel computing capacity of a resource, leading to significant acceleration.
In the first part, we analyze the convergence of mini-batch SGD under the interpolation setting, where the models are complex enough to achieve zero empirical loss. For a general class of convex losses, we show that SGD can achieve an exponential convergence rate using a constant step size, parallel to that of full gradient descent. For any given mini-batch size, we analytically derive the optimal constant step size that achieves the fastest convergence. We show that for efficient training, the batch size cannot be increased beyond a certain critical value, a phenomenon known as linear scaling. For a general class of non-convex losses, we obtain similar results on the convergence of SGD with a constant step size. In practice, these analytical results allow the parameter selection for SGD training to be automated.
In the second part of this dissertation, we develop a learning framework, EigenPro, to automate the training procedure in a principled manner. This is also the first analytical framework that extends linear scaling to match the parallel computing capacity of a resource. The framework is designed for a class of classical kernel machines. It automatically modifies a standard kernel machine to output a mathematically equivalent prediction function while allowing for extended linear scaling, i.e., higher effective parallelization and faster training time on given hardware. The resulting algorithms are accurate, principled, and very fast. For example, using a single Titan Xp GPU, training on ImageNet with 1.3 × 10^6 data points and 1000 labels takes under an hour, while smaller datasets, such as MNIST, take seconds. (Illustrative sketches of both parts follow this record.)
General Subject: Computer science.
Language: English
Link: URL: The full text of this item is provided by the Korea Education and Research Information Service (KERIS).
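
The first part of the abstract claims exponential convergence of constant-step-size mini-batch SGD under interpolation, an analytically optimal step size for each batch size, and a critical batch size beyond which larger batches stop helping. The dissertation's exact theorems are not reproduced in this record; the following is only a back-of-the-envelope sketch of that kind of analysis for the quadratic (least-squares) case, with notation (covariance H, bound beta on ||x||^2, eigenvalues lambda_1 >= ... >= lambda_min, batch size m) chosen here for illustration.

% Back-of-the-envelope sketch of the kind of result the first part of the
% abstract describes, for the quadratic case under interpolation.
% The notation (H, beta, lambda_1, lambda_min, m) is an assumption of this note.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
Let $f_i(w)=\tfrac12(\langle x_i,w\rangle-y_i)^2$, $H=\mathbb{E}[xx^{\top}]$ with
eigenvalues $\lambda_1\ge\dots\ge\lambda_{\min}>0$, $\|x\|^2\le\beta$, and suppose an
interpolating $w^{\ast}$ exists, i.e.\ $\langle x_i,w^{\ast}\rangle=y_i$ for every $i$.
For mini-batch SGD with batch size $m$ and constant step size $\eta$, the error
$e_t=w_t-w^{\ast}$ satisfies
\[
  e_{t+1}=\Bigl(I-\frac{\eta}{m}\sum_{j=1}^{m}x_jx_j^{\top}\Bigr)e_t,
  \qquad
  \mathbb{E}\Bigl[\Bigl(\sum_{j}x_jx_j^{\top}\Bigr)^{2}\Bigr]
  \preceq m\beta H+m(m-1)H^{2},
\]
hence, for $\eta\le 2m/(\beta+(m-1)\lambda_1)$,
\[
  \mathbb{E}\,\|e_{t+1}\|^{2}
  \le\Bigl(1-\eta\Bigl(2-\frac{\eta}{m}\bigl(\beta+(m-1)\lambda_1\bigr)\Bigr)\lambda_{\min}\Bigr)\|e_t\|^{2}.
\]
Optimizing the contraction over $\eta$ gives a constant step size and an exponential rate,
\[
  \eta^{\ast}(m)=\frac{m}{\beta+(m-1)\lambda_1},
  \qquad
  \mathbb{E}\,\|e_t\|^{2}
  \le\Bigl(1-\frac{m\,\lambda_{\min}}{\beta+(m-1)\lambda_1}\Bigr)^{t}\|e_0\|^{2}.
\]
The per-step gain grows linearly in $m$ while $m\lambda_1\ll\beta$ and saturates once
$m\gtrsim\beta/\lambda_1$, which plays the role of the critical batch size in the
linear-scaling statement above.
\end{document}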
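
The second part of the abstract describes EigenPro, which modifies a standard kernel machine so that SGD can take much larger steps while computing a mathematically equivalent predictor. Below is a minimal NumPy sketch of the publicly described EigenPro idea (damping the top eigendirections of the kernel operator so that the admissible step size is governed by a smaller eigenvalue); the kernel, synthetic dataset, number of damped directions q, batch size, and step-size rule are assumptions of this sketch, not the dissertation's reference implementation.

# A minimal NumPy sketch of EigenPro-style preconditioned mini-batch SGD for
# kernel least-squares. Illustration only: the kernel, synthetic data, q,
# batch size, and step-size rule below are assumptions, not the dissertation's
# reference implementation.
import numpy as np

def gaussian_kernel(A, B, bandwidth=2.0):
    """Gaussian (RBF) kernel matrix between the rows of A and the rows of B."""
    sq = (A ** 2).sum(1)[:, None] + (B ** 2).sum(1)[None, :] - 2.0 * A @ B.T
    return np.exp(-sq / (2.0 * bandwidth ** 2))

rng = np.random.default_rng(0)
n, d = 2000, 5
X = rng.standard_normal((n, d))
y = np.sin(X[:, 0]) + 0.5 * np.cos(X[:, 1])   # smooth target the kernel model can interpolate

K = gaussian_kernel(X, X)                     # n x n kernel matrix, k(x, x) = 1

# Leading eigenpairs of K/n approximate the spectrum of the kernel covariance operator.
q = 60
evals, evecs = np.linalg.eigh(K / n)
evals, evecs = evals[::-1], evecs[:, ::-1]    # sort in descending order
lam_top, lam_cut = evals[0], evals[q]         # lambda_1 and lambda_{q+1}

# EigenPro-style preconditioner P = I - sum_{j<=q} (1 - lam_{q+1}/lam_j) v_j v_j^T:
# it damps the top-q eigendirections so the usable step size is set by lam_{q+1}, not lam_1.
D = evecs[:, :q] * (1.0 - lam_cut / evals[:q])

def precondition(g):
    """Return P @ g without forming the n x n matrix P."""
    return g - D @ (evecs[:, :q].T @ g)

alpha = np.zeros(n)                           # coefficients of f(x) = sum_i alpha_i k(x_i, x)
m = 256                                       # mini-batch size
beta = np.max(np.diag(K))                     # bounds ||k(x_i, .)||^2; equals 1 here
eta = m / (beta + (m - 1) * lam_cut)          # constant step size, mirroring the sketch above

print(f"lambda_1 = {lam_top:.3f}, lambda_(q+1) = {lam_cut:.5f}, step size eta = {eta:.1f}")
print(f"initial training MSE: {np.mean(y ** 2):.3f}")

for t in range(200):
    batch = rng.choice(n, size=m, replace=False)
    residual = K[batch] @ alpha - y[batch]    # f(x_i) - y_i on the mini-batch
    g = np.zeros(n)
    g[batch] = residual / m                   # coefficient form of the stochastic gradient
    alpha -= eta * precondition(g)            # preconditioned SGD step

print(f"final training MSE:   {np.mean((K @ alpha - y) ** 2):.2e}")

Because Gaussian-kernel spectra decay quickly, lambda_{q+1} is typically orders of magnitude below lambda_1, so the admissible constant step size, and with it the batch size at which linear scaling saturates, grows by roughly the same factor; this is one way to read the "extended linear scaling" claim in the abstract.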
