Research Article

Evaluating knowledge levels of students with a Computerized Adaptive Test

Year 2023, Volume: 5 Issue: 2, 921 - 932, 28.12.2023
https://doi.org/10.51535/tell.1280384

Abstract

The present study aims to develop and evaluate a computerized adaptive testing (CAT) system for multiple-choice achievement tests. To this end, the measurement results obtained from the CAT system, which is based on Item Response Theory (IRT), were compared with those of a test based on Classical Test Theory (CTT) in terms of the students' knowledge levels, the reliability of measurement, and the number of items administered. The study was conducted with 873 students from a state university in Turkey; the research took three years and involved three phases. According to the findings, the developed CAT system determined the students' academic achievement levels with very high reliability. When all the items in the test were scored under both frameworks, a high, positive, and significant relationship was found between the resulting scores. Moreover, there was a high, positive, and significant relationship between the students' knowledge levels as estimated by the CAT system and their scores on a paper-and-pencil achievement test. Finally, the number of questions the students answered in the CAT system was reduced by 50% compared with the CTT-based test.
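
The article does not reproduce the system's source code here; as a minimal, illustrative sketch of the kind of IRT-based adaptive loop the abstract describes, the Python snippet below simulates a CAT under a two-parameter logistic (2PL) model with maximum-information item selection, expected a posteriori (EAP) ability estimation, and a standard-error stopping rule. The item bank, parameter values, stopping thresholds, and function names are hypothetical assumptions chosen for illustration and are not taken from the study.

```python
# Illustrative sketch only: a simplified 2PL CAT loop, not the study's implementation.
import numpy as np


def p_correct(theta, a, b):
    """2PL probability of a correct response at ability theta."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))


def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = p_correct(theta, a, b)
    return a ** 2 * p * (1.0 - p)


def eap_estimate(responses, a, b, grid=np.linspace(-4, 4, 161)):
    """EAP ability estimate and posterior SD on a grid with a standard-normal prior."""
    posterior = np.exp(-0.5 * grid ** 2)            # unnormalized prior
    for r, ai, bi in zip(responses, a, b):
        p = p_correct(grid, ai, bi)
        posterior *= p if r else (1.0 - p)          # likelihood of each observed response
    posterior /= posterior.sum()
    theta_hat = float((grid * posterior).sum())
    se = float(np.sqrt((((grid - theta_hat) ** 2) * posterior).sum()))
    return theta_hat, se


def run_cat(true_theta, a, b, se_target=0.30, max_items=20, seed=0):
    """Administer items by maximum information until the SE target or item cap is reached."""
    rng = np.random.default_rng(seed)
    remaining = list(range(len(a)))
    administered, responses = [], []
    theta_hat, se = 0.0, np.inf                     # start from the prior mean
    while remaining and se > se_target and len(administered) < max_items:
        # choose the unused item that is most informative at the current estimate
        info = [item_information(theta_hat, a[j], b[j]) for j in remaining]
        j = remaining.pop(int(np.argmax(info)))
        administered.append(j)
        # simulate the examinee's response from the (in practice unknown) true ability
        responses.append(int(rng.random() < p_correct(true_theta, a[j], b[j])))
        theta_hat, se = eap_estimate(responses, a[administered], b[administered])
    return theta_hat, se, len(administered)


# Hypothetical 60-item bank: discriminations (a) and difficulties (b)
bank_rng = np.random.default_rng(42)
a_bank = bank_rng.uniform(0.8, 2.0, 60)
b_bank = bank_rng.normal(0.0, 1.0, 60)
theta_hat, se, n_items = run_cat(true_theta=0.5, a=a_bank, b=b_bank)
print(f"ability estimate {theta_hat:.2f} (SE {se:.2f}) after {n_items} items")
```

Because item selection tracks the current ability estimate, different examinees receive different, typically much shorter, item sequences; this is the mechanism behind the reduction in administered items that the abstract reports for the CAT mode relative to the CTT-based test.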

Supporting Institution

Kırşehir Ahi Evran Üniversitesi BAP Koordinatörlüğü

Project Number

MMF.A4.18.008

References

  • Binet, A., & Simon, T. A. (1905). Méthode nouvelle pour le diagnostic du niveau intellectuel des anormaux. L’Année Psychologique, 11, 191-244.
  • Chalmers, R. P. (2016). Generating Adaptive and Non-Adaptive Test Interfaces for Multidimensional Item Response Theory Applications. Journal of Statistical Software, 71(5), 1-39. http://doi.org/10.18637/jss.v071.i05.
  • Chao, R.-C., Kuo, B.-C., & Tsai, Y.-H. (2015). Development of Chinese Computerized Adaptive Test System Based on Higher-Order Item Response Theory. International Journal of Innovative Computing, Information and Control, 11(1), 57–76.
  • Chen, C. M., & Chung, C. J. (2008). Personalized mobile English vocabulary learning system based on item response theory and learning memory cycle. Computers & Education, 51(2), 624–645. http://doi.org/10.1016/j.compedu.2007.06.011.
  • Cisar, S. M., Cisar, P., & Pinter, R. (2016). Evaluation of knowledge in Object Oriented Programming course with computer adaptive tests. Computers & Education, 93, 142–160. http://doi.org/10.1016/j.compedu.2015.10.016.
  • Crocker, L., & Algina, J. (1986). Introduction to classical and modern test theory. Holt, Rinehart and Winston.
  • Çelen, Ü. & Aybek, E. C. (2013). Examination of the correlation between student achievements and the items that construct the test according to the Classical Test Theory and Item Response Theory. Journal of Measurement and Evaluation in Education and Psychology, 4(2), 64–75.
  • Embretson, S. E., & Reise, S. P. (2000). Item response theory for psychologists. Mahwah, NJ: Lawrence Erlbaum Associates.
  • Gardner, W., Shear, K., Kelleher, K., Pajer, K., Mammen, O., Buysse, D., & Frank, E. (2004). Computerized adaptive measurement of depression: A simulation study. BMC Psychiatry, 4(1), 13-23.
  • Hambleton, R. K., Swaminathan, H., & Rogers, H. J. (1991). Fundamentals of item response theory. California: Sage Publications.
  • Huang, Y. M., Lin, Y. T., & Cheng, S. C. (2009). An adaptive testing system for supporting versatile educational assessment. Computers and Education, 52(1), 53–67. http://doi.org/10.1016/j.compedu.2008.06.007
  • Kalender, İ. (2011). Effects of different computerized adaptive testing strategies on recovery of ability [Unpublished doctoral dissertation]. Middle East Technical University (ODTÜ), Ankara.
  • Kaptan, F. (1993). Comparison of the computerized individual adaptive test method and the paper-pencil test with respect to ability estimation [Unpublished doctoral dissertation]. Hacettepe University. National Thesis Center of the Republic of Turkey.
  • Kuo, B. C., Daud, M., & Yang, C.-W. (2015). Multidimensional Computerized Adaptive Testing for Indonesia Junior High School Biology. EURASIA Journal of Mathematics, Science & Technology Education, 11(5), 1105–1118. http://doi.org/10.12973/eurasia.2015.1384a.
  • Liu, Y., & Zhao, X. (2017). Design Flow of English Learning System Based on Item Response Theory. International Journal of Emerging Technologies in Learning (iJET), 12(12), 91–102.
  • Lord, F.M., & Novick, M.R. (2008). Statistical theories of mental test scores. IAP.
  • Özbaşı, D., & Demirtaşlı, N. (2015). Development of Computer Literacy Test as Computerized Adaptive Testing. Journal of Measurement and Evaluation in Education and Psychology, 6(2), 218–237.
  • Özyurt, H. (2013). The development and evaluation of a web-based adaptive testing system: The case of the probability unit [Unpublished doctoral dissertation]. Karadeniz Technical University (KTÜ), Trabzon.
  • Pelanek, R. (2016). Applications of the Elo rating system in adaptive educational systems. Computers & Education, 98, 169–179. http://doi.org/10.1016/j.compedu.2016.03.017.
  • Petscher, Y., Foorman, B. R., & Truckenmiller, A. J. (2017). The Impact of Item Dependency on the Efficiency of Testing and Reliability of Student Scores from a Computer Adaptive Assessment of Reading Comprehension. Journal of Research on Educational Effectiveness, 10(2), 408–423. http://doi.org/10.1080/19345747.2016.1178361.
  • Tseng, W.T. (2016). Measuring English vocabulary size via computerized adaptive testing. Computers and Education, 97, 69–85. http://doi.org/10.1016/j.compedu.2016.02.018.
  • Wainer, H., Dorans, N. J., Flaugher, R., Green, B. F., & Mislevy, R. J. (2000). Computerized adaptive testing: A primer. Routledge.
  • Weiss, D. (1982). Improving measurement quality and efficiency with adaptive testing. Applied Psychological Measurement, 6(4), 479–492. http://doi.org/10.1177/014662168200600408.
  • Wise, S. L., & Kingsbury, G. G. (2000). Practical issues in developing and maintaining a computerized adaptive testing program. Psicologica, 21, 135-155.
  • Wu, H. M., Kuo, B. C., & Wang, S. C. (2018). Computerized Dynamic Adaptive Tests with Immediately Individualized Feedback for Primary School Mathematics Learning. International Forum of Educational Technology & Society, 20(1), 61–72.
  • Zhang, L., & VanLehn, K. (2017). Adaptively selecting biology questions generated from a semantic network. Interactive Learning Environments, 25(7), 828–846. http://doi.org/10.1080/10494820.2016.1190939.

Details

Primary Language English
Subjects Other Fields of Education
Journal Section Research Articles
Authors

Mustafa Yağcı (ORCID: 0000-0003-2911-3909)

Project Number MMF.A4.18.008
Early Pub Date December 17, 2023
Publication Date December 28, 2023
Acceptance Date December 16, 2023
Published in Issue Year 2023 Volume: 5 Issue: 2

Cite

APA Yağcı, M. (2023). Evaluating knowledge levels of students with a Computerized Adaptive Test. Journal of Teacher Education and Lifelong Learning, 5(2), 921-932. https://doi.org/10.51535/tell.1280384
