MARC View
LDR 03100nam u200421 4500
001 000000422526
005 20190215170210
008 181129s2018 |||||||||||||||||c||eng d
020 ▼a 9780438135536
035 ▼a (MiAaPQ)AAI10903728
040 ▼a MiAaPQ ▼c MiAaPQ ▼d 247004
082 0 ▼a 004
100 1 ▼a Papernot, Nicolas.
245 10 ▼a Characterizing the Limits and Defenses of Machine Learning in Adversarial Settings.
260 ▼a [S.l.]: ▼b The Pennsylvania State University, ▼c 2018.
260 1 ▼a Ann Arbor: ▼b ProQuest Dissertations & Theses, ▼c 2018.
300 ▼a 178 p.
500 ▼a Source: Dissertation Abstracts International, Volume: 79-12(E), Section: B.
502 1 ▼a Thesis (Ph.D.)--The Pennsylvania State University, 2018.
520 ▼a Advances in machine learning (ML) in recent years have enabled a dizzying array of applications such as object recognition, autonomous systems, security diagnostics, and playing the game of Go. Machine learning is not only a new paradigm for building software and systems, it is bringing social disruption at scale.
520 ▼a In this thesis, I focus my study on the integrity of ML models. Integrity refers here to the faithfulness of model predictions with respect to an expected outcome. This property is at the core of traditional machine learning evaluation, as demonstrated by the pervasiveness of metrics such as accuracy.
520 ▼a A large fraction of ML techniques were designed for benign execution environments. Yet, the presence of adversaries may invalidate some of these underlying assumptions by forcing a mismatch between the distributions on which the model is trained and tested.
520 ▼a I explore the space of attacks against ML integrity at test time. Given full or limited access to a trained model, I devise strategies that modify the test data to create a worst-case drift between the training and test distributions. The implication of this work is that an adversary with limited access to a system, and little knowledge of the ML techniques it deploys, can nevertheless mount powerful attacks as long as she can interact with the model as an oracle: submit inputs of her choice and observe the resulting predictions.
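To make the attack family summarized above concrete, the following is a minimal sketch of one well-known gradient-based test-time attack, the fast gradient sign method; the model, labels, and epsilon budget are hypothetical placeholders, not the dissertation's own algorithms.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.1):
    """One signed-gradient step that shifts input x toward higher loss,
    i.e., a worst-case first-order drift away from the distribution the
    model was trained on, within an L-infinity budget of epsilon."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Perturb each feature by +/- epsilon in the direction that increases loss.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()
```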
520 ▼a Hence, my efforts to increase the robustness of models to these adversarial manipulations strive to decrease the confidence of predictions made far from the training distribution. Informed by my progress on attacks operating in the black-box threat model, I first identify limitations of two existing defenses: defensive distillation and adversarial training.
520 ▼a I then describe recent defensive efforts addressing these shortcomings. To this end, I introduce the Deep k-Nearest Neighbors classifier, which augments deep neural networks with an integrity check at test time. The approach compares internal representations produced by the deep neural network on a test input with the representations learned from its training points. The labels of the training points whose representations are nearest to the test input across the network's layers yield an estimate of the nonconformity of the prediction with respect to the training data, and thus a more reliable measure of the prediction's credibility.
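A rough sketch of the Deep k-Nearest Neighbors check described above, under the assumption that per-layer representations have already been extracted as arrays; the class name and data layout here are illustrative, not the author's released implementation.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

class DeepKNNCheck:
    """Compare a test input's per-layer representations with those of the
    training set, and measure how often the nearest training points
    disagree with the model's prediction."""

    def __init__(self, train_layer_reprs, train_labels, k=5):
        # train_layer_reprs: list of (n_train, d_layer) arrays, one per layer.
        self.indices = [NearestNeighbors(n_neighbors=k).fit(r)
                        for r in train_layer_reprs]
        self.labels = np.asarray(train_labels)

    def nonconformity(self, test_layer_reprs, predicted_class):
        # Collect the labels of the k nearest training points at every layer.
        votes = []
        for nn, repr_l in zip(self.indices, test_layer_reprs):
            _, idx = nn.kneighbors(repr_l.reshape(1, -1))
            votes.extend(self.labels[idx.ravel()])
        # High disagreement signals a prediction made far from the
        # training data, i.e., one with low credibility.
        return float(np.mean(np.asarray(votes) != predicted_class))
```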
520 ▼a This research calls for future efforts to investigate the robustness of individual layers of deep neural networks rather than treating the model as a black-box. This aligns well with the modular nature of deep neural networks, which orchestrate simple computations to model complex functions.
590 ▼a School code: 0176.
650 4 ▼a Computer science.
690 ▼a 0984
710 20 ▼a The Pennsylvania State University. ▼b Computer Science and Engineering.
773 0 ▼t Dissertation Abstracts International ▼g 79-12B(E).
790 ▼a 0176
791 ▼a Ph.D.
792 ▼a 2018
793 ▼a English
856 40 ▼u http://www.riss.kr/pdu/ddodLink.do?id=T15000703 ▼n KERIS ▼z The full text of this material is provided by the Korea Education and Research Information Service (KERIS).
980 ▼a 201812 ▼f 2019
990 ▼a ***1012033