MARC View
LDR00000nam u2200205 4500
001000000431569
00520200224102806
008200131s2017 ||||||||||||||||| ||eng d
020 ▼a 9781392335659
035 ▼a (MiAaPQ)AAI13918160
040 ▼a MiAaPQ ▼c MiAaPQ ▼d 247004
0820 ▼a 371
1001 ▼a Xiong, Yao.
24510 ▼a Modeling Peer Assessment Scores in Massive Open Online Courses: A Bayesian Item Response Theory Approach.
260 ▼a [S.l.]: ▼b The Pennsylvania State University, ▼c 2017.
260 1 ▼a Ann Arbor: ▼b ProQuest Dissertations & Theses, ▼c 2017.
300 ▼a 163 p.
500 ▼a Source: Dissertations Abstracts International, Volume: 81-01, Section: A.
500 ▼a Publisher info.: Dissertation/Thesis.
500 ▼a Advisor: Suen, Hoi K.
5021 ▼a Thesis (Ph.D.)--The Pennsylvania State University, 2017.
520 ▼a Massive open online courses (MOOCs) have proliferated in higher education in recent years and have become popular for their open access and large-scale interactive participation. MOOCs provide promising supplementary education for college students, professionals, and others. Assessment methods in MOOCs differ from those in traditional settings: the large student population enrolled in a MOOC requires self-sustaining assessment methods. So far, automated machine grading and peer assessment have been the two primary assessment methods in MOOCs; the former is mainly used for multiple-choice questions, while the latter is used for open-ended assignments or projects. A major concern about peer assessment is the lack of peer rater credibility, in that peers may not assign reliable and accurate ratings to one another. In this study, a Graded Response Model (GRM) with rater effects within a Bayesian framework is proposed and used to examine MOOC peer assessment. Model performance is evaluated under different simulated conditions, e.g., amounts of missing data, numbers of rating scale categories, methods of assigning raters to assignments, and different MOOC-specific rating designs. Application of the model to a real-life MOOC peer assessment scenario is also illustrated to demonstrate its applicability. The results show that the proposed approach is robust to missing data. It is also found that ensuring a balanced number of raters per assignment and a balanced number of assignments assigned to each rater is the best design in terms of estimation accuracy. In addition, adding expert ratings to the model improves the estimation of ratee true ability scores, while adding common assignments graded by all raters improves the estimation of rater effect parameters. The real-life analysis indicates that applying the proposed approach to a real-life MOOC peer assessment dataset is reasonable, with empirical evidence supporting the interpretations of the estimated results.
590 ▼a School code: 0176.
650 4 ▼a Educational tests & measurements.
650 4 ▼a Educational psychology.
650 4 ▼a Educational technology.
690 ▼a 0288
690 ▼a 0525
690 ▼a 0710
71020 ▼a The Pennsylvania State University. ▼b Educational Psychology.
7730 ▼t Dissertations Abstracts International ▼g 81-01A.
790 ▼a 0176
791 ▼a Ph.D.
792 ▼a 2017
793 ▼a English
85640 ▼u http://www.riss.kr/pdu/ddodLink.do?id=T15492709 ▼n KERIS ▼z The full text of this material is provided by KERIS (Korea Education and Research Information Service).
980 ▼a 202002 ▼f 2020
990 ▼a ***1008102
991 ▼a E-BOOK