Daegu Haany University Hyangsan Library

Coded Computation for Speeding Up Distributed Machine Learning

Detailed Information
Material Type: Thesis/Dissertation
Title/Author: Coded Computation for Speeding Up Distributed Machine Learning.
Personal Author: Wang, Sinong.
Corporate Author: The Ohio State University. Electrical and Computer Engineering.
Publication: [S.l.]: The Ohio State University, 2019.
Publication: Ann Arbor: ProQuest Dissertations & Theses, 2019.
Physical Description: 140 p.
Source Record: Dissertations Abstracts International 81-02B.
Dissertation Abstract International
ISBN: 9781085657648
Dissertation Note: Thesis (Ph.D.)--The Ohio State University, 2019.
General Note: Source: Dissertations Abstracts International, Volume: 81-02, Section: B.
Advisor: Shroff, Ness.
Use Restrictions: This item must not be sold to any third party vendors.
Abstract: Large-scale machine learning has shown great promise for solving many practical applications. Such applications require massive training datasets and model parameters, and force practitioners to adopt distributed computing frameworks such as Hadoop and Spark to increase the learning speed. However, the speedup gain is far from ideal due to the latency incurred in waiting for a few slow or faulty processors, called "stragglers," to complete their tasks. To alleviate this problem, current frameworks such as Hadoop deploy various straggler detection techniques and usually replicate the straggling task on another available node, which creates a large computation overhead.

In this dissertation, we focus on a new and more effective technique, called coded computation, to deal with stragglers in distributed computation problems. It creates and exploits coding redundancy in the local computations so that the final output is recoverable from the results of partially finished workers, and can therefore alleviate the impact of straggling workers. However, we observe that current coded computation techniques are not suitable for large-scale machine learning applications: the input training data are both extremely large and sparse in structure, and existing coded computation schemes destroy this sparsity and create large computational redundancy. Thus, while these schemes reduce delays due to the stragglers in the system, they create additional delays because they end up increasing the computational load on each machine. This fact motivates us to design more efficient coded computation schemes for machine learning applications.

We begin by investigating the linear transformation problem. We analyze the minimum computation load (the number of redundant computations at each worker) of any coded computation scheme for this problem, and construct a code, named the "diagonal code," that achieves this minimum computation load. An important feature of this part is a new theoretical framework that relates the construction of a coded computation scheme to the design of a random bipartite graph containing a perfect matching. Based on this framework, we further construct several random codes that provide an even lower computation load with high probability.
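As an illustration of the coded computation idea summarized above, the following sketch (not taken from the dissertation) encodes the row blocks of a matrix with a generic random linear code so that a distributed matrix-vector product can be recovered from any k of n workers. All function names and parameters are hypothetical; the dissertation's diagonal code and random codes are different, more efficient constructions.

```python
import numpy as np

# Minimal sketch of coded matrix-vector multiplication (y = A @ x) with
# n workers, any k of which suffice to recover the result.  Uses a generic
# random linear (MDS-style) code purely for illustration.

def encode_blocks(A, n, k, rng):
    """Split A into k row blocks and form n coded blocks sum_j C[i, j] * A_j."""
    blocks = np.array_split(A, k, axis=0)          # k uncoded row blocks
    C = rng.standard_normal((n, k))                # random generator matrix (full rank w.h.p.)
    coded = [sum(C[i, j] * blocks[j] for j in range(k)) for i in range(n)]
    return C, coded

def decode(C, results, finished, k):
    """Recover the k uncoded block products from any k finished workers."""
    idx = finished[:k]                             # first k workers to finish
    stacked = np.stack([results[i] for i in idx])  # shape (k, block_rows)
    uncoded = np.linalg.solve(C[idx], stacked)     # invert the code block-wise
    return np.concatenate(uncoded)

rng = np.random.default_rng(0)
A = rng.standard_normal((600, 200))
x = rng.standard_normal(200)
n, k = 5, 3                                        # 5 workers, tolerate 2 stragglers

C, coded = encode_blocks(A, n, k, rng)
results = {i: block @ x for i, block in enumerate(coded)}  # each worker's task

# Suppose workers 1 and 4 straggle; decode from workers {0, 2, 3}.
y = decode(C, results, finished=[0, 2, 3], k=k)
assert np.allclose(y, A @ x)
```

Note that this dense random encoding also exhibits the drawback highlighted in the abstract: it destroys any sparsity in A and inflates the per-worker computation load, which is exactly what the diagonal and sparse codes are designed to avoid.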
We next consider a more complex problem that is also useful in a number of machine learning applications: matrix multiplication. We show that the coded computation schemes constructed for the linear transformation problem lead to a large decoding overhead for the matrix multiplication problem. To handle this issue, we design a new sparse code generated by a specifically designed degree distribution, which we call the "wave soliton distribution." We further design a hybrid decoding algorithm that combines peeling decoding with Gaussian elimination, which provides a fast decoder for this problem.

Finally, we shift our focus to distributed optimization, in particular the gradient coding problem. We observe that the existing gradient coding scheme, designed for the worst-case scenario, yields a large computation redundancy. To overcome this challenge, we propose the idea of approximate gradient coding, which aims to approximately compute the sum of functions. We analyze the minimum computation load for the approximate gradient coding problem and further construct two approximate gradient coding schemes, the fractional repetition code and the batch raptor code, that asymptotically achieve the minimum computation load. We apply the proposed schemes to a classical gradient descent algorithm for solving the logistic regression problem.

These works illustrate the power of designing efficient codes tailored to large-scale machine learning problems. In future research, we will focus on more complex machine learning problems, such as distributed training of deep neural networks, and on system-level optimization of coded computation schemes.
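To make the approximate gradient coding idea concrete, the sketch below applies a fractional-repetition-style assignment to a single logistic regression gradient step: each data partition is replicated on r workers, and the master sums one response per group, rescaling when a group straggles entirely. This is only a hedged illustration; the assignment, the rescaling rule, and all names are assumptions rather than the dissertation's exact fractional repetition or batch raptor constructions.

```python
import numpy as np

# Minimal sketch of approximate gradient coding with a fractional-repetition
# assignment, applied to one logistic-regression gradient step.

def logistic_grad(X, y, w):
    """Gradient of the logistic loss sum_i log(1 + exp(-y_i * x_i^T w))."""
    z = y * (X @ w)
    return -(X.T @ (y * (1.0 / (1.0 + np.exp(z)))))

rng = np.random.default_rng(1)
m, d = 1200, 10
X = rng.standard_normal((m, d))
y = np.sign(X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(m))

n_workers, r = 6, 2                 # 6 workers, each partition replicated r = 2 times
g = n_workers // r                  # g = 3 data partitions / worker groups
parts = np.array_split(np.arange(m), g)

w = np.zeros(d)
# Worker i belongs to group i % g and computes the gradient over partition i % g.
worker_grads = {i: logistic_grad(X[parts[i % g]], y[parts[i % g]], w)
                for i in range(n_workers)}

# Suppose workers 2 and 5 straggle, so group 2 returns nothing.
finished = [0, 1, 3, 4]
groups_seen = {i % g for i in finished}
approx = sum(worker_grads[min(i for i in finished if i % g == grp)]
             for grp in groups_seen)
approx *= g / len(groups_seen)      # rescale for the fully straggling group

exact = logistic_grad(X, y, w)
print(np.linalg.norm(approx - exact) / np.linalg.norm(exact))  # approximation error
```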
Subject: Computer science.
Electrical engineering.
Language: English
Link: URL: The full text of this item is provided by the Korea Education and Research Information Service (KERIS).
