MARC View
LDR00000nam u2200205 4500
001000000432311
00520200224120538
008200131s2019 ||||||||||||||||| ||eng d
020 ▼a 9781085676977
035 ▼a (MiAaPQ)AAI13900373
040 ▼a MiAaPQ ▼c MiAaPQ ▼d 247004
0820 ▼a 310
1001 ▼a Wang, Lin.
24510 ▼a Space-Filling Designs and Big Data Subsampling.
260 ▼a [S.l.]: ▼b University of California, Los Angeles, ▼c 2019.
260 1 ▼a Ann Arbor: ▼b ProQuest Dissertations & Theses, ▼c 2019.
300 ▼a 80 p.
500 ▼a Source: Dissertations Abstracts International, Volume: 81-03, Section: B.
500 ▼a Advisor: Xu, Hongquan.
5021 ▼a Thesis (Ph.D.)--University of California, Los Angeles, 2019.
506 ▼a This item must not be sold to any third party vendors.
520 ▼a Space-filling designs are commonly used in computer experiments and other scenarios for investigating complex systems, but constructing such designs is challenging. In this thesis, we construct a series of maximin-distance Latin hypercube designs via Williams transformations of good lattice point designs. Some of the constructed designs are optimal under the maximin L1-distance criterion, while others are asymptotically optimal. These designs are also shown to have small pairwise correlations between columns. The procedure is further extended to the construction of multi-level nonregular fractional factorial designs, which have better properties than regular designs. Existing research on the construction of nonregular designs focuses on two-level designs. We construct a novel class of multi-level nonregular designs by permuting the levels of regular designs via the Williams transformation. The constructed designs reduce aliasing among effects without increasing the run size and are more efficient than regular designs for studying quantitative factors. In addition, we explore the application of experimental design strategies to data-driven problems and develop a subsampling framework for big data linear regression. The subsampling procedure inherits optimality from the design matrices and therefore minimizes the mean squared error of the coefficient estimates for sufficiently large data. It works especially well for label-constrained regression, where a large covariate dataset is available but only a small set of labels is observable. The subsampling procedure can also be used for big data reduction when computation and storage are the primary concerns.
590 ▼a School code: 0031.
650 4 ▼a Statistics.
690 ▼a 0463
71020 ▼a University of California, Los Angeles. ▼b Statistics 0891.
7730 ▼t Dissertations Abstracts International ▼g 81-03B.
773 ▼t Dissertations Abstracts International
790 ▼a 0031
791 ▼a Ph.D.
792 ▼a 2019
793 ▼a English
85640 ▼u http://www.riss.kr/pdu/ddodLink.do?id=T15492175 ▼n KERIS ▼z The full text of this material is provided by the Korea Education and Research Information Service (KERIS).
980 ▼a 202002 ▼f 2020
990 ▼a ***1008102
991 ▼a E-BOOK