MARC View
LDR00000nam u2200205 4500
001000000433851
00520200226105255
008200131s2019 ||||||||||||||||| ||eng d
020 ▼a 9781392658505
035 ▼a (MiAaPQ)AAI22589450
040 ▼a MiAaPQ ▼c MiAaPQ ▼d 247004
0820 ▼a 001
1001 ▼a Huang, Chien-Chin.
24510 ▼a Scalable Machine Learning Using Dataflow Graph Analysis.
260 ▼a [S.l.]: ▼b New York University., ▼c 2019.
260 1 ▼a Ann Arbor: ▼b ProQuest Dissertations & Theses, ▼c 2019.
300 ▼a 122 p.
500 ▼a Source: Dissertations Abstracts International, Volume: 81-05, Section: B.
500 ▼a Advisor: Li, Jinyang.
5021 ▼a Thesis (Ph.D.)--New York University, 2019.
506 ▼a This item must not be sold to any third party vendors.
520 ▼a In the past decade, the abundance of computing resources and the growth of data have boosted the development of machine learning applications. Many computation frameworks, e.g., Hadoop, Spark, TensorFlow, and PyTorch, have been proposed and are now widely used in industry. However, programming large-scale machine learning applications is still challenging and requires manual effort from developers to achieve good performance. This thesis discusses two major issues with existing frameworks. First, arrays are a popular data abstraction for machine learning computation. When parallelizing arrays across hundreds of CPU machines, it is critical to choose a good partition strategy that co-locates computation with data to reduce network communication. Unfortunately, existing distributed array frameworks usually use a fixed partition scheme and require manual partitioning if a different parallel strategy is needed, making it harder to develop a distributed array program. Second, GPUs are widely used for deep learning, a popular branch of machine learning. A modern GPU can be orders of magnitude faster than a CPU, making it an attractive computation resource. However, the limited memory size of a GPU restricts the scale of the DNN models that can be run. It is desirable to have a computation framework that allows users to explore deeper and wider DNN models by leveraging CPU memory. Modern machine learning frameworks generally adopt a dataflow-style programming paradigm. The dataflow graph of an application exposes valuable information for optimizing the application. In this thesis, we present two techniques that address the above issues via dataflow graph analysis. We first design Spartan to help users parallelize distributed arrays on a CPU cluster. Spartan is a distributed array framework built on top of a set of higher-order dataflow operators. Based on these operators, Spartan provides a collection of Numpy-like array APIs. Developers can use the built-in array APIs or the operators directly to construct machine learning applications. To achieve good performance for distributed applications, Spartan analyzes the communication pattern of the dataflow graph captured through the operators and applies a greedy strategy to find a partition scheme that minimizes communication cost. To support memory-intensive deep learning applications on a single GPU, we develop SwapAdvisor, a swapping system that automatically swaps temporarily unused tensors from GPU memory to CPU memory. To minimize the communication overhead, SwapAdvisor analyzes the dataflow graph of the given DNN model and uses a custom-designed genetic algorithm to optimize the operator schedule and memory allocation. Based on the optimized operator schedule and memory allocation, SwapAdvisor determines what and when to swap to achieve good performance.
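520 ▼a [Note] The abstract above describes Spartan's core technique: greedily choosing array partitions from the dataflow graph to minimize communication. The following is a minimal, hypothetical Python sketch of that greedy idea only; the example graph, cost figures, and names (edges, reshuffle_cost) are illustrative assumptions and do not reflect Spartan's actual API or cost model.

    # Hypothetical sketch (not Spartan's API): greedily pick a partition axis
    # for each array in a small dataflow graph so that operators reading two
    # arrays with mismatched partitions pay an estimated reshuffle cost.

    # Each edge (a, b) means arrays a and b are consumed together by one
    # operator; if their partition axes differ, one input must be reshuffled.
    edges = [("X", "W"), ("W", "Y"), ("X", "Y")]
    arrays = ["X", "W", "Y"]
    reshuffle_cost = {"X": 100, "W": 10, "Y": 50}  # assumed per-array sizes

    def comm_cost(assignment):
        """Estimated network cost: for each co-accessed pair with different
        partition axes, pay the cost of moving the smaller array."""
        cost = 0
        for a, b in edges:
            if assignment[a] != assignment[b]:
                cost += min(reshuffle_cost[a], reshuffle_cost[b])
        return cost

    # Greedy pass: fix arrays one at a time, picking the axis (0 = rows,
    # 1 = columns) that minimizes cost given the choices made so far.
    assignment = {}
    for name in arrays:
        best_axis, best_cost = None, None
        for axis in (0, 1):
            trial = {**assignment, name: axis}
            # Only count edges whose endpoints are both decided so far.
            partial = sum(
                min(reshuffle_cost[a], reshuffle_cost[b])
                for a, b in edges
                if a in trial and b in trial and trial[a] != trial[b]
            )
            if best_cost is None or partial < best_cost:
                best_axis, best_cost = axis, partial
        assignment[name] = best_axis

    print(assignment, "estimated cost:", comm_cost(assignment))

In the real system, the dataflow graph and communication costs would come from the operators captured at runtime rather than hand-written constants.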
590 ▼a School code: 0146.
650 4 ▼a Computer science.
650 4 ▼a Artificial intelligence.
690 ▼a 0984
690 ▼a 0800
71020 ▼a New York University. ▼b Computer Science.
7730 ▼t Dissertations Abstracts International ▼g 81-05B.
773 ▼t Dissertations Abstracts International
790 ▼a 0146
791 ▼a Ph.D.
792 ▼a 2019
793 ▼a English
85640 ▼u http://www.riss.kr/pdu/ddodLink.do?id=T15493154 ▼n KERIS ▼z The full text of this material is provided by the Korea Education and Research Information Service (KERIS).
980 ▼a 202002 ▼f 2020
990 ▼a ***1008102
991 ▼a E-BOOK