MARC View
LDR00000nam u2200205 4500
001000000435121
00520200227115203
008200131s2019 ||||||||||||||||| ||eng d
020 ▼a 9781085657648
035 ▼a (MiAaPQ)AAI27534833
035 ▼a (MiAaPQ)OhioLINKosu1555336880521062
040 ▼a MiAaPQ ▼c MiAaPQ ▼d 247004
0820 ▼a 621.3
1001 ▼a Wang, Sinong.
24510 ▼a Coded Computation for Speeding Up Distributed Machine Learning.
260 ▼a [S.l.] : ▼b The Ohio State University, ▼c 2019.
260 1 ▼a Ann Arbor: ▼b ProQuest Dissertations & Theses, ▼c 2019.
300 ▼a 140 p.
500 ▼a Source: Dissertations Abstracts International, Volume: 81-02, Section: B.
500 ▼a Advisor: Shroff, Ness.
5021 ▼a Thesis (Ph.D.)--The Ohio State University, 2019.
506 ▼a This item must not be sold to any third party vendors.
520 ▼a Large-scale machine learning has shown great promise for solving many practical applications. Such applications require massive training datasets and model parameters, and force practitioners to adopt distributed computing frameworks such as Hadoop and Spark to increase the learning speed. However, the speedup gain is far from ideal due to the latency incurred in waiting for a few slow or faulty processors, called "stragglers," to complete their tasks. To alleviate this problem, current frameworks such as Hadoop deploy various straggler detection techniques and usually replicate the straggling task on another available node, which creates a large computation overhead.
In this dissertation, we focus on a new and more effective technique, called coded computation, to deal with stragglers in distributed computation problems. It creates and exploits coding redundancy in local computation to enable the final output to be recoverable from the results of partially finished workers, and can therefore alleviate the impact of straggling workers. However, we observe that current coded computation techniques are not suitable for large-scale machine learning applications. The reason is that the input training data are both extremely large in scale and sparse in structure, whereas the existing coded computation schemes destroy the sparsity and create large computation redundancy. Thus, while these schemes reduce delays due to the stragglers in the system, they create additional delays because they end up increasing the computational load on each machine. This fact motivates us to focus on designing more efficient coded computation schemes for machine learning applications.
We begin by investigating the linear transformation problem. We analyze the minimum computation load (the number of redundant computations in each worker) of any coded computation scheme for this problem, and construct a code, which we name the "diagonal code," that achieves this minimum computation load. An important feature of this part is that we construct a new theoretical framework that relates the construction of a coded computation scheme to the design of a random bipartite graph containing a perfect matching. Based on this framework, we further construct several random codes that provide an even lower computation load with high probability.
We next consider a more complex problem that is also useful in a number of machine learning applications: matrix multiplication. We show that the previously constructed coded computation schemes for the linear transformation problem lead to a large decoding overhead for the matrix multiplication problem. To handle this issue, we design a new sparse code that is generated by a specifically designed degree distribution, which we call the "wave soliton distribution." We further design a hybrid decoding algorithm that combines peeling decoding with Gaussian elimination and provides fast decoding for this problem.
Finally, we shift our focus to the distributed optimization problem, specifically the gradient coding problem. We observe that the existing gradient coding scheme, designed for the worst-case scenario, yields a large computation redundancy. To overcome this challenge, we propose the idea of approximate gradient coding, which aims to approximately compute the sum of functions. We analyze the minimum computation load for the approximate gradient coding problem and further construct two approximate gradient coding schemes, the fractional repetition code and the batch raptor code, that asymptotically achieve the minimum computation load. We apply our proposed schemes to a classical gradient descent algorithm for solving the logistic regression problem.
These works illustrate the power of designing efficient codes that are tailored to large-scale machine learning problems. In future research, we will focus on more complex machine learning problems, such as the distributed training of deep neural networks, and on the system-level optimization of coded computation schemes.
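The abstract's core idea, recovering the final output from the results of any sufficiently large subset of workers, can be illustrated with a small sketch. The code below is a generic MDS-style coded matrix-vector multiplication, not the dissertation's diagonal or sparse codes; the function names and the Vandermonde generator are assumptions made purely for illustration.

```python
import numpy as np

# Illustrative sketch only: a generic MDS-style coded matrix-vector multiply.
# Not the dissertation's diagonal/sparse codes; names are hypothetical.

def encode_blocks(A, n, k):
    """Split A into k row blocks and combine them into n coded blocks."""
    blocks = np.array_split(A, k, axis=0)                      # k uncoded row blocks
    G = np.vander(np.arange(1.0, n + 1), k, increasing=True)   # n x k generator matrix
    coded = [sum(G[i, j] * blocks[j] for j in range(k)) for i in range(n)]
    return coded, G

def decode(results, G, k):
    """Recover the k uncoded block products from any k finished workers."""
    idx = sorted(results)[:k]                 # worker indices that finished first
    Y = np.stack([results[i] for i in idx])   # their coded partial products
    return np.linalg.solve(G[idx, :], Y)      # any k x k Vandermonde submatrix is invertible

# n = 5 workers, any k = 3 results suffice; here workers 1 and 4 straggle.
rng = np.random.default_rng(0)
A, x = rng.normal(size=(12, 4)), rng.normal(size=4)
coded, G = encode_blocks(A, n=5, k=3)
results = {i: coded[i] @ x for i in (0, 2, 3)}   # only three workers returned
assert np.allclose(np.concatenate(decode(results, G, k=3)), A @ x)
```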
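The peeling decoding mentioned for the sparse code can likewise be sketched: a coded symbol whose neighborhood contains exactly one unresolved source block reveals that block, which is then subtracted from every other coded symbol containing it. This is an illustrative peeling decoder over real-valued blocks under an assumed decodable neighbor structure, not the thesis's hybrid peeling/Gaussian-elimination algorithm; all names are hypothetical.

```python
import numpy as np

# Hedged sketch of peeling decoding for a sparse linear code over real blocks.

def peel(coded, neighbor_sets, k):
    """coded[i] = sum of source[j] for j in neighbor_sets[i]; recover all k sources."""
    coded = [c.copy() for c in coded]
    sets = [set(s) for s in neighbor_sets]
    source = {}
    while len(source) < k:
        i = next(i for i, s in enumerate(sets) if len(s) == 1)  # degree-one coded symbol
        (j,) = sets[i]
        source[j] = coded[i]
        for m, s in enumerate(sets):                            # peel block j everywhere else
            if j in s and m != i:
                coded[m] = coded[m] - source[j]
                s.discard(j)
        sets[i] = set()
    return [source[j] for j in range(k)]

# Example: 3 source blocks, 4 received coded symbols.
rng = np.random.default_rng(2)
src = [rng.normal(size=4) for _ in range(3)]
nbrs = [[0], [0, 1], [1, 2], [2]]
coded = [sum(src[j] for j in s) for s in nbrs]
assert all(np.allclose(a, b) for a, b in zip(peel(coded, nbrs, k=3), src))
```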
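Approximate gradient coding with a fractional repetition assignment can be sketched as follows: gradient tasks are grouped, each group is replicated on several workers, and the server sums whichever copy of each group arrives first, silently dropping groups with no surviving worker (the "approximate" part). This is an illustrative toy, not the dissertation's fractional repetition or batch raptor constructions; all names are assumptions.

```python
import numpy as np

# Hedged sketch of approximate gradient coding with fractional repetition.

def assign_groups(num_workers, num_groups):
    """Assign each worker one gradient group; each group gets num_workers // num_groups copies."""
    reps = num_workers // num_groups
    return {w: w // reps for w in range(num_workers)}   # worker -> group id

def aggregate(worker_grads, assignment, num_groups, dim):
    """Sum one received copy per group; groups with no survivor are dropped."""
    total = np.zeros(dim)
    for g in range(num_groups):
        for w, grad in worker_grads.items():
            if assignment[w] == g:
                total += grad          # first surviving copy of group g
                break
    return total                       # approximate full gradient

# 6 workers, 3 groups (each replicated twice); workers 1 and 5 straggle,
# yet every group still has one surviving copy, so the sum is exact here.
rng = np.random.default_rng(1)
group_grads = [rng.normal(size=4) for _ in range(3)]
assignment = assign_groups(num_workers=6, num_groups=3)
worker_grads = {w: group_grads[assignment[w]] for w in (0, 2, 3, 4)}
approx = aggregate(worker_grads, assignment, num_groups=3, dim=4)
assert np.allclose(approx, sum(group_grads))
```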
590 ▼a School code: 0168.
650 4 ▼a Computer science.
650 4 ▼a Electrical engineering.
690 ▼a 0984
690 ▼a 0544
71020 ▼a The Ohio State University. ▼b Electrical and Computer Engineering.
7730 ▼t Dissertations Abstracts International ▼g 81-02B.
773 ▼t Dissertations Abstracts International
790 ▼a 0168
791 ▼a Ph.D.
792 ▼a 2019
793 ▼a English
85640 ▼u http://www.riss.kr/pdu/ddodLink.do?id=T15494165 ▼n KERIS ▼z The full text of this material is provided by the Korea Education and Research Information Service (KERIS).
980 ▼a 202002 ▼f 2020
990 ▼a ***1008102
991 ▼a E-BOOK