View MARC
LDR 00000nam u2200205 4500
001 000000433423
005 20200225141323
008 200131s2019 ||||||||||||||||| ||eng d
020 ▼a 9781085772532
035 ▼a (MiAaPQ)AAI13884807
040 ▼a MiAaPQ ▼c MiAaPQ ▼d 247004
0820 ▼a 621.3
1001 ▼a Liu, Jing.
24510 ▼a Robust PCA and Robust Linear Regression via Sparsity Regularization.
260 ▼a [S.l.]: ▼b University of California, San Diego, ▼c 2019.
260 1 ▼a Ann Arbor: ▼b ProQuest Dissertations & Theses, ▼c 2019.
300 ▼a 164 p.
500 ▼a Source: Dissertations Abstracts International, Volume: 81-04, Section: B.
500 ▼a Includes supplementary digital materials.
500 ▼a Advisor: Rao, Bhaskar D.
5021 ▼a Thesis (Ph.D.)--University of California, San Diego, 2019.
506 ▼a This item must not be sold to any third-party vendors.
520 ▼a Robustness to outliers is of paramount importance in data analytics. However, many data analysis tools are not robust to outliers because they minimize the sum of squared errors. One essential characteristic of outliers is that they are sparse. A significant contribution of this thesis is the development of a novel framework that directly uses the genuine L0-'norm' to enforce the sparseness of the outliers while using the L1-norm to address the inlier noise, and the development of algorithms with better recovery guarantees than the state-of-the-art L1 relaxation approach.

We first study this framework in the Robust Linear Regression setting and propose an Algorithm for Robust Outlier Support Identification (AROSI) to minimize a novel objective function. The proposed algorithm is guaranteed to converge in a finite number of iterations to a local optimum. Under certain conditions, AROSI is guaranteed to achieve exact recovery when only sparse outliers are present; the estimation error is also bounded when dense inlier noise is present. It can also identify the outliers without any false alarms.

Then, we study this framework in the Robust Principal Component Analysis (PCA) setting and propose a novel objective that additionally uses the nuclear norm to capture the low-rank matrix. The associated algorithm, termed Sparsity Regularized Principal Component Pursuit (SRPCP), is shown to converge in a finite number of iterations to a local optimum. Under certain conditions, SRPCP is guaranteed to achieve exact recovery in the presence of sparse outliers only, and bounded error in the noisy case. It can also identify the outliers without any false alarms. An important byproduct of our analysis is the result that the widely used Principal Component Pursuit (PCP) method and its missing-entry version are actually stable to dense inlier noise. We further propose an Iterative Reweighted SRPCP method that instead uses the log-determinant to capture the low-rank matrix; it also converges and achieves even better performance.

To better enforce low-rankness, we transform the Robust PCA objective into a novel Robust Sparse Linear Regression objective with an equivalent global-optima guarantee. We then propose a concise Sparse Bayesian Learning method to solve this new objective, and the method is shown to encourage the solution to be low-rank and the outliers to be sparse. To further utilize the sparsity-pattern information of the outliers in the Robust PCA problem, a modification of the above Bayesian method is proposed and analyzed. Empirical studies demonstrate the superiority of the proposed methods over existing state-of-the-art methods.
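As a rough illustration of the sparse-outlier framework summarized in the abstract above, and not the thesis's own AROSI or SRPCP algorithms, the following Python sketch alternates a least-squares fit with hard thresholding of the residuals (an L0-style step). The threshold lam, the squared-error data fit, and the helper name robust_lr_sketch are simplifying assumptions introduced here; the thesis itself pairs the L0 outlier term with an L1 inlier-noise term.

    import numpy as np

    def robust_lr_sketch(X, y, lam=1.0, n_iter=50):
        # Hypothetical sketch of L0-regularized robust regression; not AROSI.
        n = X.shape[0]
        e = np.zeros(n)  # estimated sparse outlier vector
        for _ in range(n_iter):
            # beta-step: ordinary least squares on the outlier-corrected targets
            beta, *_ = np.linalg.lstsq(X, y - e, rcond=None)
            # e-step: hard thresholding -- only residuals larger than lam
            # are declared outliers (the L0-style sparsity step)
            r = y - X @ beta
            e = np.where(np.abs(r) > lam, r, 0.0)
        return beta, e

    # Toy usage: a few gross outliers on top of small Gaussian noise.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))
    y = X @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal(size=100)
    y[:5] += 10.0  # sparse gross corruptions
    beta_hat, e_hat = robust_lr_sketch(X, y, lam=1.0)

Hard thresholding is the natural proximal step for an L0 penalty, whereas soft thresholding would correspond to the L1 relaxation that the abstract compares against.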
590 ▼a School code: 0033.
650 4 ▼a Electrical engineering.
690 ▼a 0544
71020 ▼a University of California, San Diego. ▼b Electrical and Computer Engineering.
7730 ▼t Dissertations Abstracts International ▼g 81-04B.
790 ▼a 0033
791 ▼a Ph.D.
792 ▼a 2019
793 ▼a English
85640 ▼u http://www.riss.kr/pdu/ddodLink.do?id=T15491396 ▼n KERIS ▼z The full text of this material is provided by the Korea Education and Research Information Service (KERIS).
980 ▼a 202002 ▼f 2020
990 ▼a ***1816162
991 ▼a E-BOOK