LDR | | 00000nam u2200205 4500 |
001 | | 000000435443 |
005 | | 20200228095327 |
008 | | 200131s2019 ||||||||||||||||| ||eng d |
020 | |
▼a 9781085789288 |
035 | |
▼a (MiAaPQ)AAI13811281 |
040 | |
▼a MiAaPQ
▼c MiAaPQ
▼d 247004 |
082 | 0 |
▼a 370 |
100 | 1 |
▼a Tingir, Seyfullah. |
245 | 10 |
▼a Evaluating the Effectiveness of the Expectation-Maximization (EM) Algorithm for Bayesian Network Calibration. |
260 | |
▼a [S.l.]:
▼b The Florida State University,
▼c 2019. |
260 | 1 |
▼a Ann Arbor:
▼b ProQuest Dissertations & Theses,
▼c 2019. |
300 | |
▼a 85 p. |
500 | |
▼a Source: Dissertations Abstracts International, Volume: 81-04, Section: A. |
500 | |
▼a Advisor: Almond, Russell. |
502 | 1 |
▼a Thesis (Ph.D.)--The Florida State University, 2019. |
506 | |
▼a This item must not be sold to any third party vendors. |
520 | |
▼a Educators use various statistical techniques to explain relationships between latent and observable variables. One way to model these relationships is to use Bayesian networks as a scoring model. However, adjusting the conditional probability tables (CPT parameters) to fit a set of observations is still a challenge when using Bayesian networks. A CPT provides the conditional probabilities of a single discrete variable with respect to other discrete variables. In Bayesian networks generally, the CPTs that link the proficiency variable and observable outcomes are not necessarily monotonic, but they are often constrained to be monotonic in educational applications. The monotonicity constraint states that if an examinee shows an improvement on a proficiency variable (parent variable), the individual's performance on an observable (child variable) should improve. For example, if a student has a higher writing skill, then this student is likely to score better on an essay task. For educational research, building parametric models (i.e., DiBello models) with the Expectation-Maximization algorithm provides monotonic conditional probability tables (CPTs). This dissertation explored the effectiveness of the EM algorithm within the DiBello parameterization under different sample sizes, test forms, and item structures. The data generation model specifies two skill variables with a different number of items depending on the test form. The outcome measures were the relative bias of the parameters to assess parameter recovery, Kullback-Leibler distance to evaluate the distance between CPTs, and Cohen's kappa (κ) to assess classification agreement between the data generation and estimation models. The simulation study results showed that a minimum sample size of 400 was sufficient to produce acceptable parameter bias and KL distance. A balanced distribution of simple and integrated type items produced less bias compared to an unbalanced item distribution. The parameterized EM algorithm stabilized the estimates for CPT cells with small sample sizes, providing minimal KL distance values. However, the classification agreement between the generated and estimated models was low. |
590 | |
▼a School code: 0071. |
650 | 4 |
▼a Educational tests & measurements. |
650 | 4 |
▼a Educational psychology. |
650 | 4 |
▼a Education. |
690 | |
▼a 0288 |
690 | |
▼a 0525 |
690 | |
▼a 0515 |
710 | 20 |
▼a The Florida State University.
▼b Educational Psychology & Learning Systems. |
773 | 0 |
▼t Dissertations Abstracts International
▼g 81-04A. |
773 | |
▼t Dissertations Abstracts International |
790 | |
▼a 0071 |
791 | |
▼a Ph.D. |
792 | |
▼a 2019 |
793 | |
▼a English |
856 | 40 |
▼u http://www.riss.kr/pdu/ddodLink.do?id=T15490689
▼n KERIS
▼z The full text of this material is provided by the Korea Education and Research Information Service (KERIS). |
980 | |
▼a 202002
▼f 2020 |
990 | |
▼a ***1816162 |
991 | |
▼a E-BOOK |