Daegu Haany University Hyangsan Library

Machine Learning in Adversarial Settings: Attacks and Defenses

Detailed Information
Material Type: Dissertation
Title/Author: Machine Learning in Adversarial Settings: Attacks and Defenses.
Personal Author: Hosseini, Hossein.
Corporate Author: University of Washington. Electrical and Computer Engineering.
Publication: [S.l.]: University of Washington, 2019.
Publication: Ann Arbor: ProQuest Dissertations & Theses, 2019.
Physical Description: 123 p.
Source Item: Dissertations Abstracts International 81-05B.
Dissertation Abstract International
ISBN: 9781687932174
Dissertation Note: Thesis (Ph.D.)--University of Washington, 2019.
General Note: Source: Dissertations Abstracts International, Volume: 81-05, Section: B.
Advisor: Poovendran, Radha.
Restrictions on Use: This item must not be sold to any third party vendors. This item must not be added to any third party search indexes.
Abstract: Deep neural networks have achieved remarkable success over the last decade in a variety of tasks. Such models are, however, typically designed and developed with the implicit assumption that they will be deployed in benign settings. With the increasing use of learning systems in security-sensitive and safety-critical applications, such as banking, medical diagnosis, and autonomous cars, it is important to study and evaluate their performance in adversarial settings. The security of machine learning systems has been studied from different perspectives. Learning models are subject to attacks at both the training and test phases. The main threat at test time is the evasion attack, in which the attacker subtly modifies input data such that a human observer would perceive the original content, but the model generates different outputs. Such inputs, known as adversarial examples, have been used to attack voice interfaces, face-recognition systems, and text classifiers. The goal of this dissertation is to investigate the test-time vulnerabilities of machine learning systems in adversarial settings and to develop robust defensive mechanisms. The dissertation covers two classes of models: 1) commercial ML products developed by Google, namely the Perspective, Cloud Vision, and Cloud Video Intelligence APIs, and 2) state-of-the-art image classification algorithms. In both cases, we propose novel test-time attack algorithms and also present defense methods against such attacks. (A minimal illustrative sketch of such an evasion attack follows this record.)
Subject: Computer science.
Language: English
Link: URL: The full text of this item is provided by the Korea Education and Research Information Service (KERIS).
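
For readers unfamiliar with the evasion attacks described in the abstract above, the following is a minimal illustrative sketch of the Fast Gradient Sign Method (FGSM), a standard way of generating adversarial examples. It is not taken from the dissertation; the PyTorch classifier named model, the function name fgsm_attack, and the perturbation bound eps are assumptions made for illustration only.

# Illustrative only: a minimal FGSM evasion attack against a PyTorch classifier.
# The attacker adds a small, human-imperceptible perturbation (bounded by eps)
# that pushes the model toward a wrong prediction.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, eps=0.03):
    # Work on a detached copy of the input that tracks gradients.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), label)
    loss.backward()
    # Step in the gradient-sign direction that increases the loss,
    # then clip back to the valid pixel range [0, 1].
    x_adv = x_adv + eps * x_adv.grad.sign()
    return torch.clamp(x_adv, 0.0, 1.0).detach()

# Hypothetical usage with a toy classifier:
# model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
# x = torch.rand(1, 1, 28, 28)      # an input a human observer would recognize
# y = torch.tensor([3])             # its true label
# x_adv = fgsm_attack(model, x, y)  # looks the same, but may be misclassified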
