Theses Doctoral

A Hypothesis Testing Procedure Designed for Q-Matrix Validation of Diagnostic Classification Models

Sachdeva, Ruchi Jain

Cognitive diagnosis models have become popular largely because they explain a student's poor performance in terms of the skills that have not yet been mastered, making it possible for educators to provide targeted remediation and tailor instruction to individual strengths and weaknesses. For these procedures to be effective, however, the Q-matrix, which establishes the relationships between the latent variables representing knowledge structures (columns) and the individual items on an assessment (rows), must be carefully specified. The goal of this work is to develop a new test statistic for detecting misspecifications of the Q-matrix, including both underfitting and overfitting. In addition to developing the new statistic, the dissertation evaluates its performance and derives an estimator of its asymptotic variance based on the Fisher information matrix of the slip and guess parameters.
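To make the Q-matrix structure concrete, the sketch below (not taken from the dissertation) shows a small hypothetical Q-matrix with items as rows and attributes as columns, and how slip and guess parameters enter the item response probability under a DINA-type model, which the abstract's reference to slip and guess parameters suggests; the attribute patterns, item entries, and parameter values are illustrative only.

import numpy as np

# Hypothetical 4-item, 3-attribute Q-matrix: Q[j, k] = 1 if item j requires attribute k.
Q = np.array([
    [1, 0, 0],
    [0, 1, 0],
    [1, 1, 0],
    [0, 1, 1],
])

def dina_prob(alpha, q_row, slip, guess):
    # eta = 1 only if the examinee has mastered every attribute the item requires;
    # masters answer correctly unless they slip, non-masters only by guessing.
    eta = int(np.all(alpha >= q_row))
    return (1 - slip) if eta else guess

alpha = np.array([1, 1, 0])  # hypothetical examinee: attributes 1 and 2 mastered, 3 not
for j, q_row in enumerate(Q):
    print(f"item {j + 1}: P(correct) = {dina_prob(alpha, q_row, slip=0.1, guess=0.2):.2f}")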
The test statistic was evaluated in two simulation studies and also applied to the fraction subtraction dataset. The first simulation study investigated Type I error rates under four sample sizes, three levels of correlation among attributes, and three levels of item discrimination. Results showed that as the sample size increases, the Type I error rate decreases toward the nominal 5% level. Surprisingly, the most discriminating items (item discrimination of 4) had the largest Type I error rates. The power study showed that the statistic is very powerful in detecting under-specification or over-specification of the Q-matrix when sample sizes are large and/or when items discriminate strongly between students who have and have not mastered a skill. Interestingly, when the Q-matrix contains multiple misspecifications and two misspecified entries are tested simultaneously, under-specification is detected better than over-specification. In the analysis of the fraction subtraction dataset, 15% of the Q-matrix entries showed enough evidence to reject the null hypothesis, clearly indicating that the test detects misfit in the original expert-designed Q-matrix.
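The dissertation defines its own test statistic; as a rough, generic illustration of the kind of Wald-type test that an asymptotic variance based on the Fisher information supports, the sketch below compares a hypothetical scalar estimate to its information-based standard error at the 5% level. The function name, estimate, and information value are assumptions for illustration, not results from the study.

import numpy as np
from scipy import stats

def wald_test(estimate, fisher_info, null_value=0.0, level=0.05):
    # The inverse of the Fisher information approximates the asymptotic variance,
    # so its square root gives the standard error of the estimate.
    se = np.sqrt(1.0 / fisher_info)
    z = (estimate - null_value) / se
    p_value = 2 * (1 - stats.norm.cdf(abs(z)))
    return z, p_value, p_value < level

# Hypothetical numbers: an estimated contrast of 0.12 with information 400
# (variance 0.0025, standard error 0.05) gives z = 2.4 and is rejected at the 5% level.
z, p, reject = wald_test(0.12, 400)
print(f"z = {z:.2f}, p = {p:.3f}, reject H0: {reject}")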

Files

  • Sachdeva_columbia_0054D_14595.pdf (application/pdf, 4.27 MB)

More About This Work

Academic Units
Measurement and Evaluation
Thesis Advisors
Johnson, Matthew
Degree
Ph.D., Columbia University
Published Here
May 14, 2018