Material type | Dissertation |
---|---|
Title/Author | Coded Computation for Speeding Up Distributed Machine Learning. |
Personal author | Wang, Sinong. |
Corporate author | The Ohio State University. Electrical and Computer Engineering. |
Publication | [S.l.]: The Ohio State University, 2019. |
Publication | Ann Arbor: ProQuest Dissertations & Theses, 2019. |
Physical description | 140 p. |
Source note | Dissertations Abstracts International 81-02B. |
ISBN | 9781085657648 |
Dissertation note | Thesis (Ph.D.)--The Ohio State University, 2019. |
General note | Source: Dissertations Abstracts International, Volume: 81-02, Section: B. Advisor: Shroff, Ness. |
Access restriction | This item must not be sold to any third party vendors. |
Abstract | Large-scale machine learning has shown great promise for solving many practical applications. Such applications require massive training datasets and model parameters, and force practitioners to adopt distributed computing frameworks such as Hadoop and Spark to increase the learning speed. However, the speedup gain is far from ideal due to the latency incurred in waiting for a few slow or faulty processors, called "stragglers," to complete their tasks. To alleviate this problem, current frameworks such as Hadoop deploy various straggler detection techniques and usually replicate the straggling task on another available node, which creates a large computation overhead.
In this dissertation, we focus on a new and more effective technique, called coded computation, to deal with stragglers in distributed computation problems. It creates and exploits coding redundancy in local computation so that the final output can be recovered from the results of partially finished workers, thereby alleviating the impact of straggling workers. However, we observe that current coded computation techniques are not suitable for large-scale machine learning applications: the input training data are both extremely large and sparse, while existing coded computation schemes destroy this sparsity and create large computation redundancy. Thus, while these schemes reduce delays due to stragglers in the system, they create additional delays because they end up increasing the computational load on each machine. This fact motivates us to design more efficient coded computation schemes for machine learning applications.
We begin by investigating the linear transformation problem. We analyze the minimum computation load (the number of redundant computations in each worker) of any coded computation scheme for this problem, and construct a code we name the "diagonal code" that achieves this minimum computation load. An important feature of this part is a new theoretical framework that relates the construction of a coded computation scheme to the design of a random bipartite graph containing a perfect matching. Based on this framework, we further construct several random codes that provide an even lower computation load with high probability.
We next consider a more complex problem that is also useful in a number of machine learning applications: matrix multiplication. We show that the coded computation schemes previously constructed for the linear transformation problem lead to a large decoding overhead for the matrix multiplication problem. To handle this issue, we design a new sparse code generated by a specifically designed degree distribution that we call the "wave soliton distribution." We further design a hybrid decoding algorithm that combines peeling decoding with Gaussian elimination, which provides a fast decoder for this problem.
Finally, we shift our focus to the distributed optimization problem, namely gradient coding. We observe that existing gradient coding schemes, which are designed for the worst-case scenario, yield a large computation redundancy. To overcome this challenge, we propose the idea of approximate gradient coding, which aims to approximately compute the sum of functions. We analyze the minimum computation load for the approximate gradient coding problem and construct two approximate gradient coding schemes, the fractional repetition code and the batch raptor code, that asymptotically achieve this minimum computation load. We apply the proposed schemes in a classical gradient descent algorithm for solving the logistic regression problem.
These works illustrate the power of designing efficient codes tailored to large-scale machine learning problems. In future research, we will focus on more complex machine learning problems, such as the distributed training of deep neural networks, and on system-level optimization of coded computation schemes. |
Subject | Computer science. Electrical engineering. |
Language | English |
Link | The full text of this material is provided by the Korea Education and Research Information Service (KERIS). |
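As a loose illustration of the coded-computation idea summarized in the abstract (not the dissertation's actual diagonal, sparse, or raptor code constructions), the sketch below splits a matrix-vector product across three hypothetical workers, one of which computes a redundant, coded block. The full result is then recoverable from any two of the three workers, so a single straggler can be ignored:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))  # data matrix, split row-wise across workers
x = rng.standard_normal(3)

# Two "systematic" blocks plus one coded (redundant) block.
A1, A2 = A[:2], A[2:]
tasks = {"w1": A1, "w2": A2, "w3": A1 + A2}

# Each worker computes its assigned block-times-x product.
results = {name: block @ x for name, block in tasks.items()}

# Suppose worker w2 straggles: recover A2 @ x from the other two results,
# since (A1 + A2) @ x - A1 @ x = A2 @ x.
y1, y3 = results["w1"], results["w3"]
y2_recovered = y3 - y1

y = np.concatenate([y1, y2_recovered])
assert np.allclose(y, A @ x)
```

This toy (2, 3) scheme shows the trade-off the dissertation targets: the coded task `A1 + A2` buys straggler tolerance but adds computation per worker, and for sparse `A` the sum `A1 + A2` is denser than either block, which is exactly the redundancy/sparsity tension the abstract describes.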