Daegu Haany University Hyangsan Library

Machine Learning Systems for Highly-distributed and Rapidly-growing Data

Detailed Information
Material type: Thesis (dissertation)
Title/Author: Machine Learning Systems for Highly-distributed and Rapidly-growing Data.
Personal author: Hsieh, Kevin.
Corporate author: Carnegie Mellon University. Electrical and Computer Engineering.
Publication: [S.l.]: Carnegie Mellon University, 2019.
Publication: Ann Arbor: ProQuest Dissertations & Theses, 2019.
Physical description: 132 p.
Source record: Dissertations Abstracts International 81-06B.
ISBN: 9781392825662
Dissertation note: Thesis (Ph.D.)--Carnegie Mellon University, 2019.
General note: Source: Dissertations Abstracts International, Volume: 81-06, Section: B.
Advisor: Gibbons, Phillip B.
Use restrictions: This item must not be sold to any third party vendors.
Abstract: The usability and practicality of any machine learning (ML) application are largely influenced by two critical but hard-to-attain factors: low latency and low cost. Unfortunately, achieving low latency and low cost is very challenging when ML depends on real-world data that are highly distributed and rapidly growing (e.g., data collected by mobile phones and video cameras all over the world). Such real-world data pose many challenges in communication and computation. For example, when training data are distributed across data centers that span multiple continents, communication among the data centers can easily overwhelm the limited wide-area network bandwidth, leading to prohibitively high latency and high cost. In this dissertation, we demonstrate that the latency and cost of ML on highly-distributed and rapidly-growing data can be improved by one to two orders of magnitude by designing ML systems that exploit the characteristics of ML algorithms, ML model structures, and ML training/serving data. We support this thesis statement with three contributions. First, we design a system that provides both low-latency and low-cost ML serving (inference) over large-scale and continuously-growing datasets, such as videos. Second, we build a system that makes ML training over geo-distributed datasets as fast as training within a single data center. Third, we present the first detailed study of, and a system-level solution to, a fundamental and largely overlooked problem: ML training over non-IID (i.e., not independent and identically distributed) data partitions (e.g., facial images collected by cameras vary according to the demographics of each camera's location). (An illustrative sketch of such a non-IID partition follows this record.)
Subject: Computer engineering.
Computer science.
Language: English
Link: URL: The full text of this item is provided by the Korea Education and Research Information Service (KERIS).
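
The abstract's third contribution concerns ML training over non-IID data partitions. As a point of reference only, here is a minimal, hypothetical Python sketch (not code from the dissertation) contrasting an IID split with a label-skewed non-IID split; the helper names, class counts, and partition counts are assumptions made purely for illustration.

```python
# Illustrative sketch (not from the dissertation): compare an IID data split
# with a label-skewed, non-IID split across a few simulated partitions
# (e.g., cameras whose data reflect their local population).
import random
from collections import Counter

random.seed(0)

NUM_CLASSES = 10       # hypothetical number of label classes
NUM_PARTITIONS = 4     # hypothetical number of cameras / data centers
SAMPLES = 20_000

# Toy labeled dataset: each sample is represented only by its class label.
labels = [random.randrange(NUM_CLASSES) for _ in range(SAMPLES)]

def make_iid_partitions(labels, k):
    """Shuffle globally, then split evenly: every partition sees every class."""
    shuffled = labels[:]
    random.shuffle(shuffled)
    return [shuffled[i::k] for i in range(k)]

def make_label_skewed_partitions(labels, k, classes_per_partition=3):
    """Each partition receives only a few classes, mimicking the non-IID
    setting described in the abstract."""
    owned = {p: set(random.sample(range(NUM_CLASSES), classes_per_partition))
             for p in range(k)}
    parts = [[] for _ in range(k)]
    for y in labels:
        holders = [p for p in range(k) if y in owned[p]]
        if holders:
            parts[random.choice(holders)].append(y)
    return parts

def show(name, parts):
    print(name)
    for p, part in enumerate(parts):
        dist = Counter(part)
        print(f"  partition {p}: {len(part):6d} samples, "
              f"classes seen: {sorted(dist)}")

show("IID split", make_iid_partitions(labels, NUM_PARTITIONS))
show("Label-skewed (non-IID) split",
     make_label_skewed_partitions(labels, NUM_PARTITIONS))
```

Running the sketch prints per-partition class coverage: the IID split exposes every partition to all classes, while the skewed split confines each partition to a few classes, which is the regime the dissertation's third contribution targets.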
