Daegu Haany University Hyangsan Library


Modeling Peer Assessment Scores in Massive Open Online Courses: A Bayesian Item Response Theory Approach

Details
Material type: Thesis/Dissertation
Title/Author: Modeling Peer Assessment Scores in Massive Open Online Courses: A Bayesian Item Response Theory Approach.
Personal author: Xiong, Yao.
Corporate author: The Pennsylvania State University. Educational Psychology.
Publication: [S.l.]: The Pennsylvania State University, 2017.
Publication: Ann Arbor: ProQuest Dissertations & Theses, 2017.
Physical description: 163 p.
Source record: Dissertations Abstracts International 81-01A.
ISBN: 9781392335659
Dissertation note: Thesis (Ph.D.)--The Pennsylvania State University, 2017.
General note: Source: Dissertations Abstracts International, Volume: 81-01, Section: A.
Publisher info.: Dissertation/Thesis.
Advisor: Suen, Hoi K.
Abstract: Massive open online courses (MOOCs) have proliferated in higher education in recent years and have become popular for their open access and large-scale interactive participation. MOOCs provide promising supplementary education for college students, professionals, and others. Assessment methods in MOOCs differ from those in traditional settings: the large student population enrolled in a MOOC requires self-sustaining assessment methods. So far, machine-automated grading and peer assessment have been the two primary assessment methods in MOOCs. While the former is mainly used for multiple-choice questions, the latter is used for open-ended assignments or projects.

A major concern about peer assessment is the lack of peer-rater credibility, in that peers may not be able to assign reliable and accurate ratings to their peers' work. In this study, a Graded Response Model (GRM) with a rater effect within a Bayesian framework is proposed and used to examine MOOC peer assessment. Model performance is evaluated under different simulated conditions, e.g., different amounts of missing data, numbers of rating-scale categories, methods of assigning raters to assignments, and different MOOC-specific rating designs. Application of the model to a real-life MOOC peer assessment scenario is also illustrated to demonstrate its applicability in practice.

The results show that the proposed approach is robust to missing data. It is also found that ensuring a balanced number of raters per assignment and a balanced number of assignments assigned to each rater is the best method in terms of estimation accuracy. In addition, adding expert ratings to the model improves the estimation of ratees' true ability scores, while adding common assignments graded by all raters improves the estimation of rater-effect parameters. The real-life analysis indicates that applying the proposed approach to a real MOOC peer assessment dataset is reasonable, with empirical evidence supporting the interpretations of the estimated results.
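The model described in the abstract can be sketched as follows. The dissertation's exact specification is not reproduced in this record, so the parameterization below is an assumption: it is a common way of writing a graded response model with a rater-severity effect, not a quotation from the thesis.

```latex
% Illustrative GRM with a rater severity term (assumed parameterization).
% Y_{ir}: rating that rater r assigns to ratee i, on categories k = 1,...,K.
\[
  P\bigl(Y_{ir} \ge k \mid \theta_i, \phi_r\bigr)
    = \frac{\exp\{\,a(\theta_i - \phi_r - b_k)\,\}}
           {1 + \exp\{\,a(\theta_i - \phi_r - b_k)\,\}},
  \qquad k = 2, \dots, K,
\]
```

Here $\theta_i$ is ratee $i$'s latent ability, $\phi_r$ is rater $r$'s severity, $b_2 < \dots < b_K$ are ordered category thresholds, and $a$ is a discrimination parameter. In a Bayesian treatment, priors such as $\theta_i \sim N(0,1)$ and $\phi_r \sim N(0,\sigma_\phi^2)$ are placed on the person and rater parameters, and estimation proceeds from the joint posterior, which is why sparse rating designs with missing data can still be handled.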
Subjects: Educational tests & measurements.
Educational psychology.
Educational technology.
Language: English
Link: URL: The full text of this material is provided by the Korea Education and Research Information Service (KERIS).
