Material type | Thesis/Dissertation |
---|---|
Title/Author | Explainable Machine Learning for Science and Medicine. |
Personal author | Lundberg, Scott. |
Corporate author | University of Washington. Computer Science and Engineering. |
Publication | [S.l.]: University of Washington, 2019. |
Publication | Ann Arbor: ProQuest Dissertations & Theses, 2019. |
Physical description | 176 p. |
Source record | Dissertations Abstracts International 81-05B. |
ISBN | 9781088327913 |
Dissertation note | Thesis (Ph.D.)--University of Washington, 2019. |
General note | Source: Dissertations Abstracts International, Volume: 81-05, Section: B. Advisor: Lee, Su-In. |
Use restrictions | This item must not be sold to any third party vendors. This item must not be added to any third party search indexes. |
Abstract | Understanding why a machine learning model made a certain prediction can be as crucial as the prediction's accuracy in many scientific and medical applications. However, the highest accuracy on large modern datasets is often achieved by complex models that even experts struggle to interpret, such as tree-based ensembles or deep learning models. In this dissertation I present several solutions that improve our ability to explain modern (often complex) machine learning models. Each of these solutions was developed in response to specific challenges that we faced in applying machine learning to biology and medicine. I present solutions that enable better interpretation of very large graphical models, and show how that can enhance our understanding of human genome regulation. I then present a unified, model-agnostic approach to explaining the output of any machine learning model that connects game theory with local explanations, uniting many previous methods. By applying this approach to early-warning medical decision support, we are able to use a complex, high-accuracy model while also providing explanations of the clinical risk factors that impacted the model's prediction. I then focus specifically on tree-based models, such as random forests and gradient boosted trees, for which we have developed the first polynomial-time algorithm to exactly compute classic attribution values from game theory. Based on these methods we have created a new set of tools for understanding both global model structure and individual model predictions. The associated open source software supports many modern machine learning frameworks and is widely used across many industries. |
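The "classic attribution values from game theory" the abstract refers to are Shapley values: a feature's attribution is its average marginal contribution over all coalitions of the other features. The sketch below is a brute-force O(2^n) illustration of that definition, not the dissertation's polynomial-time tree algorithm; replacing missing features with values from a single `baseline` point is an assumption made here to keep the example self-contained.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley attributions for model f at input x.

    Features outside a coalition are filled in from `baseline`
    (a single-reference approximation chosen for this sketch).
    """
    n = len(x)
    players = range(n)

    def value(coalition):
        # Evaluate f with coalition features taken from x, the rest from baseline.
        z = [x[i] if i in coalition else baseline[i] for i in players]
        return f(z)

    phi = [0.0] * n
    for i in players:
        others = [j for j in players if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Shapley weight: |S|! (n - |S| - 1)! / n!
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += w * (value(set(S) | {i}) - value(set(S)))
    return phi

# Toy linear model: each attribution equals the feature's
# contribution relative to the baseline.
f = lambda z: 2 * z[0] + 3 * z[1]
print(shapley_values(f, x=[1.0, 1.0], baseline=[0.0, 0.0]))  # → [2.0, 3.0]
```

Enumerating all coalitions is exponential in the number of features, which is exactly why an exact polynomial-time algorithm for tree ensembles, as described in the abstract, matters in practice.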
Subject | Computer science. |
Language | English |
Link | The full text of this material is provided by the Korea Education and Research Information Service (KERIS). |