Daegu Haany University Hyangsan Library

Scalable Machine Learning Using Dataflow Graph Analysis

Detailed Information
Material Type: Thesis (dissertation)
Title/Author: Scalable Machine Learning Using Dataflow Graph Analysis.
Personal Author: Huang, Chien-Chin.
Corporate Author: New York University. Computer Science.
Publication: [S.l.]: New York University, 2019.
Publication: Ann Arbor: ProQuest Dissertations & Theses, 2019.
Physical Description: 122 p.
Source Record: Dissertations Abstracts International 81-05B.
Dissertation Abstracts International
ISBN: 9781392658505
Thesis Note: Thesis (Ph.D.)--New York University, 2019.
General Note: Source: Dissertations Abstracts International, Volume: 81-05, Section: B.
Advisor: Li, Jinyang.
Use Restriction: This item must not be sold to any third party vendors.
Abstract: In the past decade, the abundance of computing resources and the growth of data have boosted the development of machine learning applications. Many computation frameworks, e.g., Hadoop, Spark, TensorFlow, and PyTorch, have been proposed and are widely used in industry. However, programming large-scale machine learning applications is still challenging and requires manual effort from developers to achieve good performance. This thesis discusses two major issues with existing frameworks.

First, the array is a popular data abstraction for machine learning computation. When parallelizing array computation across hundreds of CPU machines, it is critical to choose a good partition strategy that co-locates the arrays involved in a computation to reduce network communication. Unfortunately, existing distributed array frameworks usually use a fixed partition scheme and require manual partitioning when a different parallel strategy is needed, making distributed array programs harder to develop. Second, GPUs are widely used for deep learning, a popular branch of machine learning. Modern GPUs can be orders of magnitude faster than CPUs, making them an attractive computation resource. However, a GPU's limited memory restricts the scale of the DNN models that can be run. It is desirable to have a computation framework that allows users to explore deeper and wider DNN models by leveraging CPU memory.

Modern machine learning frameworks generally adopt a dataflow-style programming paradigm. The dataflow graph of an application exposes valuable information for optimizing the application. In this thesis, we present two techniques that address the above issues via dataflow graph analysis.

We first design Spartan to help users parallelize distributed arrays on a CPU cluster. Spartan is a distributed array framework built on top of a set of higher-order dataflow operators. Based on these operators, Spartan provides a collection of NumPy-like array APIs. Developers can choose the built-in array APIs or use the operators directly to construct machine learning applications. To achieve good performance for the distributed application, Spartan analyzes the communication pattern of the dataflow graph captured through the operators and applies a greedy strategy to find a partition scheme that minimizes communication cost.

To support memory-intensive deep learning applications on a single GPU, we develop SwapAdvisor, a swapping system that automatically swaps temporarily unused tensors from GPU memory to CPU memory. To minimize communication overhead, SwapAdvisor analyzes the dataflow graph of the given DNN model and uses a custom-designed genetic algorithm to optimize operator scheduling and memory allocation. Based on the optimized operator schedule and memory allocation, SwapAdvisor determines what and when to swap to achieve good performance.
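The abstract's idea of choosing array partitions from a dataflow graph can be illustrated with a toy sketch. This is not Spartan's actual API or algorithm; the function, its inputs, and the cost model are all hypothetical. Each array picks a partition axis (0 = by rows, 1 = by cols), edges record the axis the connecting operator prefers, and a single greedy pass tries to make linked arrays agree so that mismatched edges, which would force a network reshuffle, are minimized.

```python
# Toy sketch (hypothetical, not Spartan's API): greedily assign a
# partition axis to each distributed array in a dataflow graph.
def greedy_partition(arrays, edges):
    """arrays: {name: size}; edges: [(a, b, preferred_axis)], where
    preferred_axis is the axis the connecting operator wants both
    operands partitioned on (e.g. 0 for row-wise element-wise ops)."""
    choice = {}
    for a, b, axis in edges:            # greedy: first edge to touch
        choice.setdefault(a, axis)      # an array fixes its partition
        choice.setdefault(b, axis)
    for name in arrays:                 # untouched arrays default to rows
        choice.setdefault(name, 0)
    # simulated communication cost: an edge whose endpoints disagree
    # with the operator's preferred axis reshuffles the smaller array
    cost = sum(min(arrays[a], arrays[b])
               for a, b, axis in edges
               if choice[a] != axis or choice[b] != axis)
    return choice, cost
```

Here a later edge that conflicts with an earlier assignment simply pays its cost, which is the kind of trade-off a real partitioner would search over rather than accept.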
Subject: Computer science.
Artificial intelligence.
Language: English
Link: URL: The full text of this item is provided by the Korea Education and Research Information Service (KERIS).
