MARC View
LDR00000nam u2200205 4500
001000000433093
00520200225112114
008200131s2019 ||||||||||||||||| ||eng d
020 ▼a 9781687932174
035 ▼a (MiAaPQ)AAI22617411
040 ▼a MiAaPQ ▼c MiAaPQ ▼d 247004
0820 ▼a 004
1001 ▼a Hosseini, Hossein.
24510 ▼a Machine Learning in Adversarial Settings: Attacks and Defenses.
260 ▼a [S.l.]: ▼b University of Washington, ▼c 2019.
260 1 ▼a Ann Arbor: ▼b ProQuest Dissertations & Theses, ▼c 2019.
300 ▼a 123 p.
500 ▼a Source: Dissertations Abstracts International, Volume: 81-05, Section: B.
500 ▼a Advisor: Poovendran, Radha.
5021 ▼a Thesis (Ph.D.)--University of Washington, 2019.
506 ▼a This item must not be sold to any third party vendors.
506 ▼a This item must not be added to any third party search indexes.
520 ▼a Deep neural networks have achieved remarkable success over the last decade in a variety of tasks. Such models are, however, typically designed and developed with the implicit assumption that they will be deployed in benign settings. With the increasing use of learning systems in security-sensitive and safety-critical applications, such as banking, medical diagnosis, and autonomous cars, it is important to study and evaluate their performance in adversarial settings. The security of machine learning systems has been studied from different perspectives. Learning models are subject to attacks at both the training and test phases. The main threat at test time is the evasion attack, in which the attacker subtly modifies input data such that a human observer would perceive the original content, but the model generates different outputs. Such inputs, known as adversarial examples, have been used to attack voice interfaces, face-recognition systems, and text classifiers. The goal of this dissertation is to investigate the test-time vulnerabilities of machine learning systems in adversarial settings and develop robust defensive mechanisms. The dissertation covers two classes of models: 1) commercial ML products developed by Google, namely the Perspective, Cloud Vision, and Cloud Video Intelligence APIs, and 2) state-of-the-art image classification algorithms. In both cases, we propose novel test-time attack algorithms and also present defense methods against such attacks.
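520 ▼a [Note: The evasion attack described above can be illustrated with a minimal sketch. This is a generic fast gradient sign method (FGSM) example in PyTorch, not the dissertation's own attack algorithm; the model, the eps budget, and the function name are illustrative assumptions.]

    # Hypothetical illustration of an evasion attack: FGSM, assuming a
    # pretrained PyTorch classifier `model` and an input batch `x` in [0, 1].
    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, label, eps=0.03):
        # Perturb x so the classifier's output changes while the edit stays
        # small enough that a human still perceives the original content
        # (the "adversarial example" described in the abstract).
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), label)
        loss.backward()
        # Move each pixel by at most eps in the direction that increases the loss.
        x_adv = x + eps * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()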
590 ▼a School code: 0250.
650 4 ▼a Computer science.
690 ▼a 0984
71020 ▼a University of Washington. ▼b Electrical and Computer Engineering.
7730 ▼t Dissertations Abstracts International ▼g 81-05B.
790 ▼a 0250
791 ▼a Ph.D.
792 ▼a 2019
793 ▼a English
85640 ▼u http://www.riss.kr/pdu/ddodLink.do?id=T15493462 ▼n KERIS ▼z The full text of this material is provided by the Korea Education and Research Information Service (KERIS).
980 ▼a 202002 ▼f 2020
990 ▼a ***1008102
991 ▼a E-BOOK