MARC View
LDR00000nam u2200205 4500
001000000434246
00520200226143128
008200131s2019 ||||||||||||||||| ||eng d
020 ▼a 9781088397657
035 ▼a (MiAaPQ)AAI22617033
040 ▼a MiAaPQ ▼c MiAaPQ ▼d 247004
0820 ▼a 004
1001 ▼a Chen, Tianqi.
24510 ▼a Scalable and Intelligent Learning Systems.
260 ▼a [S.l.] : ▼b University of Washington, ▼c 2019.
260 1 ▼a Ann Arbor : ▼b ProQuest Dissertations & Theses, ▼c 2019.
300 ▼a 130 p.
500 ▼a Source: Dissertations Abstracts International, Volume: 81-05, Section: B.
500 ▼a Advisor: Guestrin, Carlos.
5021 ▼a Thesis (Ph.D.)--University of Washington, 2019.
506 ▼a This item must not be sold to any third party vendors.
506 ▼a This item must not be added to any third party search indexes.
520 ▼a Data, models, and computing are the three pillars that enable machine learning to solve real-world problems at scale. Making progress in these three domains requires not only disruptive algorithmic advances but also systems innovations that continue to squeeze more efficiency out of modern hardware. Learning systems are at the center of every intelligent application today. This thesis discusses aspects of learning systems in the context of three real-world systems -- XGBoost, MXNet, and TVM. The first half of the thesis focuses on scalable learning systems that learn parameters for complex models from large-scale data. We introduce XGBoost, a scalable tree boosting system that scales to billions of examples in distributed or memory-limited settings. We then take a systematic approach, in the context of MXNet, to reducing the memory consumption of training, scaling up real-world deep learning workloads with a minimal amount of resources. The second half of the thesis brings intelligence to learning systems themselves. We introduce TVM, a system for deploying machine learning to diverse hardware platforms. TVM exposes graph-level and operator-level optimization knobs to provide performance portability for deep learning workloads across diverse hardware back-ends. We propose transfer learning methods to automate TVM and deliver performance competitive with state-of-the-art hand-tuned libraries for low-power CPUs, mobile GPUs, and server-class GPUs.
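520 ▼a As a brief illustration of the tree-boosting workflow the abstract describes, the following is a minimal sketch using the public xgboost Python package; the synthetic data and parameter values are illustrative assumptions for demonstration, not taken from the thesis.

    # Minimal gradient-boosted-tree training sketch with the xgboost package.
    # The synthetic dataset and parameter choices are illustrative assumptions.
    import numpy as np
    import xgboost as xgb

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 10))           # 1000 examples, 10 features
    y = (X[:, 0] + X[:, 1] > 0).astype(int)   # simple binary target

    dtrain = xgb.DMatrix(X, label=y)          # XGBoost's internal data format
    params = {
        "objective": "binary:logistic",  # logistic loss for binary classification
        "max_depth": 4,                  # depth of each regression tree
        "eta": 0.1,                      # learning rate (shrinkage)
        "tree_method": "hist",           # histogram-based split finding; scales to large data
    }
    booster = xgb.train(params, dtrain, num_boost_round=50)
    preds = booster.predict(dtrain)           # predicted probabilities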
590 ▼a School code: 0250.
650 4 ▼a Computer science.
690 ▼a 0984
71020 ▼a University of Washington. ▼b Computer Science and Engineering.
7730 ▼t Dissertations Abstracts International ▼g 81-05B.
790 ▼a 0250
791 ▼a Ph.D.
792 ▼a 2019
793 ▼a English
85640 ▼u http://www.riss.kr/pdu/ddodLink.do?id=T15493439 ▼n KERIS ▼z The full text of this item is provided by the Korea Education and Research Information Service (KERIS).
980 ▼a 202002 ▼f 2020
990 ▼a ***1008102
991 ▼a E-BOOK