Daegu Haany University Hyangsan Library


Characterizing the Limits and Defenses of Machine Learning in Adversarial Settings


Detailed Information
Material type: Dissertation
Title/Author: Characterizing the Limits and Defenses of Machine Learning in Adversarial Settings.
Personal author: Papernot, Nicolas.
Corporate author: The Pennsylvania State University. Computer Science and Engineering.
Publication: [S.l.]: The Pennsylvania State University, 2018.
Publication: Ann Arbor: ProQuest Dissertations & Theses, 2018.
Physical description: 178 p.
Source record: Dissertation Abstracts International 79-12B(E).
ISBN: 9780438135536
Dissertation note: Thesis (Ph.D.)--The Pennsylvania State University, 2018.
General note: Source: Dissertation Abstracts International, Volume: 79-12(E), Section: B.
Abstract: Advances in machine learning (ML) in recent years have enabled a dizzying array of applications such as object recognition, autonomous systems, security diagnostics, and playing the game of Go. Machine learning is not only a new paradigm for bui…
Abstract: In this thesis, I focus my study on the integrity of ML models. Integrity refers here to the faithfulness of model predictions with respect to an expected outcome. This property is at the core of traditional machine learning evaluation, as demon…
Abstract: A large fraction of ML techniques were designed for benign execution environments. Yet, the presence of adversaries may invalidate some of these underlying assumptions by forcing a mismatch between the distributions on which the model is trained…
Abstract: I explore the space of attacks against ML integrity at test time. Given full or limited access to a trained model, I devise strategies that modify the test data to create a worst-case drift between the training and test distributions. The implic…
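The test-time attacks summarized above can be illustrated with a minimal fast-gradient-sign style sketch: perturb an input in the direction that maximally increases the model's loss, within a small L-infinity budget. The logistic-regression model, its weights, the `fgsm_perturb` helper, and the epsilon value below are illustrative assumptions for this sketch, not the thesis's actual models or attack implementations.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """Shift x by eps in the sign of the loss gradient (worst-case L-inf drift)."""
    p = sigmoid(w @ x + b)            # model confidence for class 1
    grad_x = (p - y) * w              # d(cross-entropy)/dx for logistic regression
    return x + eps * np.sign(grad_x)  # adversarially perturbed input

w = np.array([2.0, -1.0])             # toy model weights (assumed for the sketch)
b = 0.0
x = np.array([1.0, 0.5])              # clean input, correctly classified as class 1
y = 1.0                               # true label

x_adv = fgsm_perturb(x, w, b, y, eps=0.9)
print(sigmoid(w @ x + b) > 0.5)       # True: clean input classified correctly
print(sigmoid(w @ x_adv + b) > 0.5)   # False: small perturbation flips the prediction
```

A budget-bounded sign step is enough to flip the toy model's decision, which mirrors the abstract's point that an adversary can force a worst-case mismatch between training and test distributions.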
Abstract: Hence, my efforts to increase the robustness of models to these adversarial manipulations strive to decrease the confidence of predictions made far from the training distribution. Informed by my progress on attacks operating in the black-box thr…
Abstract: I then describe recent defensive efforts addressing these shortcomings. To this end, I introduce the Deep k-Nearest Neighbors classifier, which augments deep neural networks with an integrity check at test time. The approach compares internal re…
Abstract: This research calls for future efforts to investigate the robustness of individual layers of deep neural networks rather than treating the model as a black-box. This aligns well with the modular nature of deep neural networks, which orchestrate…
General subject: Computer science.
Language: English
Link: The full text of this material is provided by the Korea Education and Research Information Service (KERIS).
