MARC View
LDR00000nam u2200205 4500
001000000435454
00520200228095529
008200131s2019 ||||||||||||||||| ||eng d
020 ▼a 9781085643016
035 ▼a (MiAaPQ)AAI13884002
040 ▼a MiAaPQ ▼c MiAaPQ ▼d 247004
0820 ▼a 310
1001 ▼a Erickson, Collin B.
24510 ▼a Adaptive Computer Experiments for Metamodeling.
260 ▼a [S.l.] : ▼b Northwestern University, ▼c 2019.
260 1 ▼a Ann Arbor: ▼b ProQuest Dissertations & Theses, ▼c 2019.
300 ▼a 173 p.
500 ▼a Source: Dissertations Abstracts International, Volume: 81-03, Section: B.
500 ▼a Advisor: Ankenman, Bruce E.
5021 ▼a Thesis (Ph.D.)--Northwestern University, 2019.
506 ▼a This item must not be sold to any third party vendors.
520 ▼a Computer simulation experiments are commonly used as an inexpensive alternative to real-world experiments to form a metamodel that approximates the input-output relationship of the real-world system. Because the metamodel can be evaluated much faster than the actual simulation, it is useful for decision making and for making predictions at inputs that have not yet been evaluated. The two main components of computer experiments are choosing which input points to evaluate and building a statistical model, called a metamodel, from the resulting data to approximate the simulation output. In this dissertation, we study three problems in computer experiments. First, we investigate Gaussian process models, one of the most commonly used types of metamodel. We find that, despite implementing nearly the same model, different software implementations fit to the same data can produce very different predictions. The time required to fit each model can also vary by orders of magnitude across implementations. Second, we propose a new algorithm for running sequential computer experiments when the user wants better prediction accuracy in regions where the simulation output varies the most. In sequential experiments, the data are gathered in batches, and data from previous batches help inform which points to select in subsequent batches. We assert that practitioners often aim to fit the entire surface reasonably well while achieving better prediction accuracy in the regions most interesting to them. This goal can be achieved by changing the criterion used at each iteration to choose which points to evaluate next. Third, we devise a new algorithm for adaptive computer experiments that allows a metamodel to be constructed from large amounts of data. Standard Gaussian process models are computationally infeasible beyond a few thousand points. Using the sparse grid designs of Plumlee [2014], Gaussian process inference can be performed on over 100,000 points. We build upon this work to allow data to be added adaptively, focusing simulation effort on the input dimensions that are harder to predict.
590 ▼a School code: 0163.
650 4 ▼a Industrial engineering.
650 4 ▼a Statistics.
690 ▼a 0546
690 ▼a 0463
71020 ▼a Northwestern University. ▼b Industrial Engineering and Management Sciences.
7730 ▼t Dissertations Abstracts International ▼g 81-03B.
773 ▼t Dissertations Abstracts International
790 ▼a 0163
791 ▼a Ph.D.
792 ▼a 2019
793 ▼a English
85640 ▼u http://www.riss.kr/pdu/ddodLink.do?id=T15491335 ▼n KERIS ▼z The full text of this material is provided by the Korea Education and Research Information Service (KERIS).
980 ▼a 202002 ▼f 2020
990 ▼a ***1816162
991 ▼a E-BOOK