Material Type | Dissertation (thesis) |
---|---|
Title/Author Statement | Modeling Peer Assessment Scores in Massive Open Online Courses: A Bayesian Item Response Theory Approach. |
Personal Author | Xiong, Yao. |
Corporate Author | The Pennsylvania State University. Educational Psychology. |
Publication Info | [S.l.]: The Pennsylvania State University, 2017. |
Publication Info | Ann Arbor: ProQuest Dissertations & Theses, 2017. |
Physical Description | 163 p. |
Host Item Entry | Dissertations Abstracts International 81-01A. |
ISBN | 9781392335659 |
Dissertation Note | Thesis (Ph.D.)--The Pennsylvania State University, 2017. |
General Note |
Source: Dissertations Abstracts International, Volume: 81-01, Section: A.
Publisher info.: Dissertation/Thesis. Advisor: Suen, Hoi K. |
Abstract | Massive open online courses (MOOCs) have proliferated in higher education in recent years and have become popular for their open access and large-scale interactive participation. MOOCs provide promising supplementary education for college students, professionals, and others. Assessment in MOOCs differs from that in traditional settings: the large student populations enrolled in a MOOC require self-sustaining assessment methods. So far, machine-automated grading and peer assessment have been the two primary assessment methods in MOOCs; the former is mainly used for multiple-choice questions, the latter for open-ended assignments or projects. A major concern about peer assessment is the lack of peer rater credibility, in that peers may not be able to assign reliable and accurate ratings to one another. In this study, a Graded Response Model (GRM) with a rater effect within a Bayesian framework is proposed and used to examine MOOC peer assessment. Model performance is evaluated under different simulated conditions, e.g., different amounts of missing data, numbers of rating-scale categories, methods of assigning raters to assignments, and different MOOC-specific rating designs. Application of the model to a real-life MOOC peer assessment scenario is also illustrated to demonstrate its applicability in practice. The results show that the proposed approach is robust to missing data. It is also found that ensuring a balanced number of raters per assignment and a balanced number of assignments per rater is the best assignment method in terms of estimation accuracy. In addition, adding expert ratings to the model improves the estimation of ratees' true ability scores, while adding common assignments graded by all raters improves the estimation of rater effect parameters.
The real-life analysis indicates that applying the proposed approach to a real MOOC peer assessment dataset is reasonable, with empirical evidence supporting the interpretations of the estimated results. |
General Subject | Educational tests & measurements. Educational psychology. Educational technology. |
Language | English |
Link |
: The full text of this item is provided by the Korea Education and Research Information Service (KERIS). |
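The abstract's core technique, a Graded Response Model extended with a rater (severity) effect, can be sketched on the data-generating side as follows. This is a minimal illustrative simulation, not the dissertation's actual specification: all parameter values, dimensions, and the 4-raters-per-assignment design are assumptions, and the Bayesian estimation step (e.g., via MCMC) is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

n_ratees, n_raters, n_cats = 100, 100, 5
theta = rng.normal(0.0, 1.0, n_ratees)     # ratee true ability
severity = rng.normal(0.0, 0.5, n_raters)  # rater effect: positive = harsher
a = 1.5                                    # common discrimination (assumed)
b = np.array([-1.5, -0.5, 0.5, 1.5])       # ordered category thresholds

def grm_probs(theta_i, sev_r):
    """Rating-category probabilities for one ratee-rater pair.

    Cumulative P(X >= k) = logistic(a * (theta - (b_k + severity))),
    so a harsher rater shifts every threshold upward.
    """
    cum = 1.0 / (1.0 + np.exp(-a * (theta_i - (b + sev_r))))
    cum = np.concatenate(([1.0], cum, [0.0]))  # P(X >= 0) = 1, P(X >= K) = 0
    return cum[:-1] - cum[1:]

# Balanced design: every assignment is graded by 4 randomly chosen peers,
# mirroring the "balanced number of raters per assignment" condition.
ratings = {}
for i in range(n_ratees):
    for r in rng.choice(n_raters, size=4, replace=False):
        ratings[(i, r)] = rng.choice(n_cats, p=grm_probs(theta[i], severity[r]))
```

In a Bayesian fit, `theta`, `severity`, and the thresholds would be treated as unknowns with priors and recovered from the observed `ratings`; the missing-data robustness discussed in the abstract corresponds to most (ratee, rater) cells being unobserved.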