Daegu Haany University Hyangsan Library


Item Cluster-Based Assessment: Modeling and Design


Detailed Information
Material Type: Thesis/Dissertation
Title/Author: Item Cluster-Based Assessment: Modeling and Design.
Personal Author: Arneson, Amy.
Corporate Author: University of California, Berkeley. Education.
Publication: [S.l.]: University of California, Berkeley, 2019.
Publication: Ann Arbor: ProQuest Dissertations & Theses, 2019.
Physical Description: 134 p.
Source Record: Dissertations Abstracts International 81-04B.
ISBN: 9781085784238
Dissertation Note: Thesis (Ph.D.)--University of California, Berkeley, 2019.
General Note: Source: Dissertations Abstracts International, Volume: 81-04, Section: B.
Advisor: Wilson, Mark.
Restrictions on Use: This item must not be sold to any third party vendors.
Abstract: This three-paper dissertation explores item cluster-based assessments, first in general as they relate to modeling, and then through specific issues surrounding a particular item cluster-based assessment design.

There should be a reasonable analogy between the structure of a psychometric model and the cognitive theory that the assessment is based upon. Specifically, for item response theory (IRT) models of educational assessment scores, the structure of dependencies among items that are designed as item clusters (groups of items that share common stimulus material, etc.) should be reflected in the model. This type of designed local item dependence (LID) can be modeled in many different ways. The literature on the existence of LID, and on models developed to account for it, is fairly extensive, though little work has unified and organized these different approaches. The first paper presents a general framework to guide modeling decisions for item cluster-based assessments by formalizing some of the terminology used in the context of LID, providing an overview of methods for detecting LID, and discussing general modeling approaches for response data that are theorized to exhibit LID.

Recent pushes for increased rigor and a focus on complex constructs (such as critical thinking) in K-16 education highlight a need to develop assessments that measure these complex constructs. The second paper explores these issues in the context of three complex constructs in statistics education: Linking Data to a Claim (LDC), Meta-Representation Competence (MRC), and Formal Inference (FoI). We present a multidimensional treatment and analysis of field test data for the Critical Reasoning for College-Readiness (CR4CR) Assessment, an item cluster-based assessment. We found that the LDC and FoI items as written can provide a mapping of student ability estimates to the construct map levels as defined, but that the MRC items do not. Further, as expected, we found moderately strong correlations among the three constructs.

The third paper describes the design of selected response items based on open-ended counterparts for the CR4CR Assessment, and the empirical comparison of these different formats. It is commonly thought that multiple choice (or selected response) items on tests do not provide useful information to educators regarding higher-level thinking skills such as argumentation or critical thinking. However, there is also a need for diagnostic assessments that provide educators with timely feedback on student performance, so that instruction can be adapted or interventions administered based upon student needs. We found that although the existing literature suggests that selected response item types are, in general, easier than constructed response item types, this may not be the case for all constructs. We found that, for the LDC and FoI constructs, multi-select multiple choice items behaved similarly to their constructed response counterparts.
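As background for the modeling discussion in the abstract (this note is not part of the catalog record or the dissertation): one common way to absorb designed LID into an IRT model is a Rasch testlet model, sketched below in LaTeX. The notation here is illustrative and ours, not the author's: theta_p is person proficiency, beta_i is item difficulty, and gamma_pd is a person-by-cluster effect. The dissertation's first paper surveys a range of such approaches; this is only one of them.

    % A minimal sketch of a Rasch testlet model for designed LID
    % (illustrative notation; not taken from the dissertation itself).
    % Person p responds to item i, which belongs to cluster d(i):
    \[
      \Pr\bigl(X_{pi} = 1 \mid \theta_p,\, \gamma_{p,d(i)}\bigr)
        = \frac{\exp\bigl(\theta_p + \gamma_{p,d(i)} - \beta_i\bigr)}
               {1 + \exp\bigl(\theta_p + \gamma_{p,d(i)} - \beta_i\bigr)},
    \]
    \[
      \theta_p \sim N\bigl(0, \sigma_\theta^2\bigr), \qquad
      \gamma_{pd} \sim N\bigl(0, \sigma_d^2\bigr)
      \text{ independently of } \theta_p .
    \]
    % theta_p  : overall proficiency of person p
    % beta_i   : difficulty of item i
    % gamma_pd : person-specific cluster effect; items in the same cluster
    %            share gamma, which induces the extra within-cluster
    %            dependence (the LID).
    % If every sigma_d^2 = 0, the gammas vanish and the model reduces to
    % the standard Rasch model, i.e., local item independence.

Because the standard Rasch model is nested within this one (all cluster variances set to zero), comparing the fit of the two models gives a simple check of whether the designed clusters actually induce measurable dependence.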
Subject Terms: Educational tests & measurements; Mathematics education; Statistics.
Language: English
Link (URL): The full text of this item is provided by the Korea Education and Research Information Service (KERIS).
