The Use of Explanatory Item Response Models in Computerized Adaptive Testing
Developing an item bank for computerized adaptive testing is extremely challenging. Item difficulties are assumed to be constant across sub-groups, item positions, and test forms, and items that violate these assumptions are eliminated, which makes the item pool development stage even more time-consuming. The main purpose of this research is to investigate how average test length, item exposure, test overlap, and the precision of ability estimates change when explanatory item response models are used in computerized adaptive testing. The analyses were conducted on 10 simulated item pools, each containing 100 items and 1,440 examinees. Each item bank was calibrated with the Rasch model, latent regression, the linear logistic test model (LLTM), and a latent regression LLTM. The response patterns and prior ability estimates were then used in post-hoc simulations, run in 10 replications for each item bank and model. The simulations were based on EAP estimation, two stopping rules (precision and minInfo), and the randomesque item exposure control rule. The computerized adaptive testing simulations based on explanatory item response models were conducted with a modified version of the R package "catR". When the sub-groups in the population were ignored in the post-hoc simulations, all models estimated very similar mean ability scores. Explanatory item response models were also found to have no effect on average test length, test overlap, or overall item exposure rate. An important finding is that latent regression and the linear logistic test model succeeded in reducing the item exposure rate for the first 20 items.
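To make the simulation design concrete, the following is a minimal sketch of one post-hoc-style adaptive test under the Rasch model, with EAP ability estimation, a precision stopping rule, and randomesque exposure control (the item is drawn at random from the k most informative candidates). It is not the study's actual catR-based pipeline: the bank size, stopping threshold, grid, and randomesque group size are illustrative assumptions, and a single simulated examinee stands in for the 1,440 used in the study.

```python
import numpy as np

rng = np.random.default_rng(42)

def rasch_prob(theta, b):
    """Rasch model: probability of a correct response given ability and difficulty."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def eap_estimate(responses, bs, grid=np.linspace(-4, 4, 81)):
    """EAP ability estimate and posterior SD under a standard-normal prior."""
    prior = np.exp(-0.5 * grid**2)
    like = np.ones_like(grid)
    for u, b in zip(responses, bs):
        p = rasch_prob(grid, b)
        like *= p**u * (1 - p)**(1 - u)
    post = prior * like
    post /= post.sum()
    theta = np.sum(grid * post)
    se = np.sqrt(np.sum((grid - theta) ** 2 * post))
    return theta, se

def simulate_cat(true_theta, item_bs, se_stop=0.3, max_items=50, k=5):
    """One adaptive test: randomesque choice among the k most informative
    remaining items; stop when the EAP standard error drops below se_stop."""
    administered, responses = [], []
    theta, se = 0.0, np.inf
    for _ in range(max_items):
        available = [i for i in range(len(item_bs)) if i not in administered]
        # Rasch information peaks where |theta - b| is smallest
        ranked = sorted(available, key=lambda i: abs(theta - item_bs[i]))
        choice = ranked[rng.integers(0, min(k, len(ranked)))]
        administered.append(choice)
        responses.append(int(rng.random() < rasch_prob(true_theta, item_bs[choice])))
        theta, se = eap_estimate(responses, item_bs[administered])
        if se < se_stop:
            break
    return theta, se, len(administered)

# 100-item bank with difficulties drawn from N(0, 1) -- an assumed calibration
bank = rng.normal(0.0, 1.0, size=100)
theta_hat, se, n_items = simulate_cat(true_theta=0.5, item_bs=bank)
print(n_items, round(theta_hat, 2), round(se, 2))
```

Replacing the fixed difficulties in `bank` with difficulties predicted from item properties (LLTM) or starting `theta` from a covariate-based prior (latent regression) is, in spirit, how the explanatory models enter the simulations described above.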