Daegu Haany University Hyangsan Library

Code Optimization on GPUs

Detailed Information
Material Type: Thesis/Dissertation
Title/Author: Code Optimization on GPUs.
Personal Author: Hong, Changwan.
Corporate Author: The Ohio State University. Computer Science and Engineering.
Publication: [S.l.]: The Ohio State University, 2019.
Publication: Ann Arbor: ProQuest Dissertations & Theses, 2019.
Physical Description: 198 p.
Source Item: Dissertations Abstracts International 81-05B.
ISBN: 9781392825303
Dissertation Note: Thesis (Ph.D.)--The Ohio State University, 2019.
General Note: Source: Dissertations Abstracts International, Volume: 81-05, Section: B.
Advisor: Sadayappan, Ponnuswamy.
Restrictions on Use: This item must not be sold to any third party vendors.
Abstract: Graphics Processing Units (GPUs) have become popular in the last decade due to their high memory bandwidth and powerful computing capacity. Nevertheless, achieving high performance on GPUs is not trivial. It generally requires significant programming expertise and understanding of the details of low-level execution mechanisms in GPUs. This dissertation introduces approaches for optimizing regular and irregular applications. To optimize regular applications, it introduces a novel approach to GPU kernel optimization by identifying and alleviating bottleneck resources. This approach, however, is not effective for irregular applications because of data-dependent branches and memory accesses. Hence, tailored approaches are developed for two popular domains of irregular applications: graph algorithms and sparse matrix primitives.

Performance modeling for GPUs is carried out by abstract kernel emulation along with latency/gap modeling of resources. Sensitivity analysis with respect to resource latency/gap parameters is used to predict the bottleneck resource for a given kernel's execution. The utility of the bottleneck analysis is demonstrated in two contexts: i) enhancing the OpenTuner auto-tuner with the new bottleneck-driven optimization strategy, with effectiveness demonstrated by experimental results on all kernels from the Rodinia suite and on GPU tensor contraction kernels from the NWChem computational chemistry suite; and ii) manual code optimization, where two case studies illustrate the use of bottleneck analysis to iteratively improve the performance of code from state-of-the-art DSL code generators. However, the above approach is ineffective for irregular applications such as graph algorithms and sparse linear systems.

Graph algorithms are used in various applications, and high-level GPU graph processing frameworks are an attractive alternative for achieving both high productivity and high performance. This dissertation develops an approach to graph processing on GPUs that seeks to overcome some of the performance limitations of existing frameworks. It uses multiple data representations and execution strategies for dense versus sparse vertex frontiers, depending on the fraction of active graph vertices. Experimental results demonstrate performance improvement over current state-of-the-art GPU graph processing frameworks for many benchmark programs and data sets.

Sparse matrix primitives such as sparse matrix-vector multiplication (SpMV), sparse matrix multi-vector multiplication (SpMM), and sampled dense-dense matrix multiplication (SDDMM) are key kernels for scientific computing as well as data science and machine learning. A large number of recent research studies have focused on various GPU implementations of the SpMV kernel, but the SpMM and SDDMM kernels have received much less attention. This dissertation presents in-depth analyses to contrast SpMV and SpMM, and develops new sparse-matrix representations and computation approaches suited to achieving high data-movement efficiency and effective GPU parallelization of SpMM. It also introduces a novel tiling approach for high-performance implementations of SpMM and SDDMM with the standard sparse matrix representation, Compressed Sparse Row (CSR). Experimental evaluation demonstrates performance improvement over existing implementations.

In short, this dissertation contributes to enhancing compiler technology to achieve high performance on GPUs, chiefly by considering data locality and concurrency.
General Subject: Computer science.
Language: English
Link: URL: The full text of this item is provided by the Korea Education and Research Information Service (KERIS).
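
The abstract above is built around the standard Compressed Sparse Row (CSR) representation and the SpMV, SpMM, and SDDMM kernels defined over it. As background, the following is a minimal one-thread-per-row CSR SpMV kernel in CUDA; it is an illustrative baseline only, not the tiled, data-movement-optimized scheme developed in the dissertation, and all identifiers (csr_spmv, row_ptr, col_idx, vals) are assumed names.

// Minimal CSR sparse matrix-vector multiply (SpMV) on a GPU.
// One thread computes one output row: a simple baseline for illustration,
// not the optimized SpMM/SDDMM scheme described in the abstract above.
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

__global__ void csr_spmv(int num_rows,
                         const int   *row_ptr,  // size num_rows + 1
                         const int   *col_idx,  // size nnz
                         const float *vals,     // size nnz
                         const float *x,        // dense input vector
                         float       *y)        // dense output vector
{
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < num_rows) {
        float sum = 0.0f;
        for (int j = row_ptr[row]; j < row_ptr[row + 1]; ++j)
            sum += vals[j] * x[col_idx[j]];
        y[row] = sum;
    }
}

int main() {
    // 3x3 example matrix in CSR form:
    //   [1 0 2]
    //   [0 3 0]
    //   [4 0 5]
    std::vector<int>   h_row_ptr = {0, 2, 3, 5};
    std::vector<int>   h_col_idx = {0, 2, 1, 0, 2};
    std::vector<float> h_vals    = {1, 2, 3, 4, 5};
    std::vector<float> h_x       = {1, 1, 1};
    std::vector<float> h_y(3, 0);

    int *d_row_ptr, *d_col_idx;
    float *d_vals, *d_x, *d_y;
    cudaMalloc(&d_row_ptr, h_row_ptr.size() * sizeof(int));
    cudaMalloc(&d_col_idx, h_col_idx.size() * sizeof(int));
    cudaMalloc(&d_vals,    h_vals.size()    * sizeof(float));
    cudaMalloc(&d_x,       h_x.size()       * sizeof(float));
    cudaMalloc(&d_y,       h_y.size()       * sizeof(float));
    cudaMemcpy(d_row_ptr, h_row_ptr.data(), h_row_ptr.size() * sizeof(int),   cudaMemcpyHostToDevice);
    cudaMemcpy(d_col_idx, h_col_idx.data(), h_col_idx.size() * sizeof(int),   cudaMemcpyHostToDevice);
    cudaMemcpy(d_vals,    h_vals.data(),    h_vals.size()    * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(d_x,       h_x.data(),       h_x.size()       * sizeof(float), cudaMemcpyHostToDevice);

    csr_spmv<<<1, 32>>>(3, d_row_ptr, d_col_idx, d_vals, d_x, d_y);
    cudaMemcpy(h_y.data(), d_y, h_y.size() * sizeof(float), cudaMemcpyDeviceToHost);
    printf("y = [%.0f %.0f %.0f]\n", h_y[0], h_y[1], h_y[2]);  // expected: [3 3 9]
    return 0;
}

Compiled with nvcc, this prints y = [3 3 9] for the example matrix; the dissertation's contribution lies in going well beyond such a baseline for SpMM and SDDMM.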
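
The abstract also describes switching between data representations and execution strategies for dense versus sparse vertex frontiers, depending on the fraction of active vertices. The sketch below illustrates that general idea under assumed details: the 5% switching threshold, the placeholder per-vertex work, and the names (process_frontier, process_sparse_frontier, process_dense_frontier) are assumptions for illustration, not the framework's actual interface.

// Sketch of frontier-representation switching for GPU graph traversal.
// A sparse frontier is an explicit list of active vertex IDs; a dense
// frontier is a 0/1 flag per vertex. The host picks a representation based
// on the fraction of active vertices. Threshold and kernels are illustrative.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void process_sparse_frontier(const int *frontier, int frontier_size,
                                        int *visited)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < frontier_size)
        visited[frontier[i]] = 1;        // placeholder per-vertex work
}

__global__ void process_dense_frontier(const int *active, int num_vertices,
                                       int *visited)
{
    int v = blockIdx.x * blockDim.x + threadIdx.x;
    if (v < num_vertices && active[v])
        visited[v] = 1;                  // placeholder per-vertex work
}

// Host-side policy: choose the frontier representation by its density.
void process_frontier(int num_vertices, int frontier_size,
                      const int *d_frontier_list, const int *d_active_flags,
                      int *d_visited)
{
    const float density   = (float)frontier_size / num_vertices;
    const float THRESHOLD = 0.05f;       // illustrative switch point
    const int   block = 256;
    if (density < THRESHOLD) {
        int grid = (frontier_size + block - 1) / block;
        process_sparse_frontier<<<grid, block>>>(d_frontier_list, frontier_size, d_visited);
    } else {
        int grid = (num_vertices + block - 1) / block;
        process_dense_frontier<<<grid, block>>>(d_active_flags, num_vertices, d_visited);
    }
}

int main() {
    const int V = 1000;
    int h_frontier[3] = {10, 500, 999};  // 3 active vertices: density 0.003
    int *d_frontier, *d_active, *d_visited;
    cudaMalloc(&d_frontier, 3 * sizeof(int));
    cudaMalloc(&d_active,   V * sizeof(int));
    cudaMalloc(&d_visited,  V * sizeof(int));
    cudaMemcpy(d_frontier, h_frontier, 3 * sizeof(int), cudaMemcpyHostToDevice);
    cudaMemset(d_active,  0, V * sizeof(int));
    cudaMemset(d_visited, 0, V * sizeof(int));

    process_frontier(V, 3, d_frontier, d_active, d_visited);  // takes the sparse path

    int visited_10 = 0;
    cudaMemcpy(&visited_10, d_visited + 10, sizeof(int), cudaMemcpyDeviceToHost);
    printf("vertex 10 visited: %d\n", visited_10);  // expected: 1
    return 0;
}

Production frameworks, including the one summarized in the abstract, combine such switching with load balancing and multiple traversal strategies; this sketch shows only the representation choice itself.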
