The Effect of Item Selection and Parameter Estimation Methods on the Accuracy of Pretest Item Parameters in Online Calibration in CAT
Abstract
Computerized adaptive testing (CAT) carries the risk that the quality of the item bank degrades over time as items become exposed. A practical and advantageous remedy is to implement online calibration. This study investigated the effect of online calibration components on the precision of parameter estimation and on the cumulative sample size (specific to the calibration method). It also aimed to adapt Joint Maximum Likelihood as a pretest calibration method within the online calibration procedure and to assess this method's feasibility. A simulation study was conducted under the one-parameter logistic (1-PL) and two-parameter logistic (2-PL) models to compare the pretest item selection methods (Maximum Fisher Information-MFI, D-optimal value design-DVOD, and Bayesian D-optimal design-BDOD), the parameter estimation methods (Joint Maximum Likelihood-JML and Marginal Maximum Likelihood with One EM Cycle-OEM), the sample size of the random calibration stage (250, 500, and 1000), and the calibration sample size per pretest item (250, 500, and 1000). The effect of these factors on parameter precision was evaluated by calculating bias and root mean squared error (RMSE). The results indicate that the performance of the item selection methods differs according to the Item Response Theory (IRT) model and the parameter estimation method. Among the calibration methods, OEM generally produced the most precise item parameter estimates, although JML performed better under some conditions. The sample size of the random stage did not have a consistent effect on parameter estimation. Lastly, parameter accuracy increased as the calibration sample size increased.
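The bias and RMSE criteria used to evaluate parameter recovery can be sketched as below; this is a minimal illustration of the two formulas, and the example parameter values are hypothetical, not drawn from the study.

```python
import numpy as np

def bias_and_rmse(true_params, est_params):
    """Bias (mean signed error) and RMSE between true and estimated item parameters."""
    errors = np.asarray(est_params, dtype=float) - np.asarray(true_params, dtype=float)
    bias = errors.mean()                 # systematic over/under-estimation
    rmse = np.sqrt((errors ** 2).mean()) # overall estimation error
    return bias, rmse

# Hypothetical true vs. recovered difficulty (b) parameters for three pretest items.
true_b = [-1.0, 0.0, 1.0]
est_b = [-0.9, 0.1, 1.1]
bias, rmse = bias_and_rmse(true_b, est_b)  # each error is +0.1, so bias ≈ 0.1, RMSE ≈ 0.1
```

In the study these statistics would be computed separately for each item parameter (e.g., discrimination a and difficulty b) under each simulated condition.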