Enhancing English Language Assessment through Adaptive Testing Systems: A Psychometric and Technological Approach
Keywords:
adaptive testing, English language assessment, psychometrics, IRT, AI scoring, learner modeling, digital assessment
Abstract
In recent years, adaptive testing has emerged as a transformative approach in English language assessment, combining psychometric rigor with digital innovation. Unlike conventional fixed-form exams, adaptive systems adjust item difficulty dynamically according to the test-taker's responses, yielding precise, efficient, and personalized measurement. This paper examines adaptive testing from theoretical, technological, and pedagogical perspectives. It explores Item Response Theory (IRT), item selection algorithms, automated scoring, and ethical considerations including fairness and accessibility. The study concludes that adaptive testing can enhance assessment reliability and learner engagement when designed responsibly, offering promising directions for both high-stakes examinations and classroom evaluations.
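The core loop the abstract describes, adjusting item difficulty in response to each answer, can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes a two-parameter logistic (2PL) IRT model, maximum-information item selection, and a deliberately crude fixed-step ability update (operational systems would instead use maximum-likelihood or Bayesian estimation). Item tuples `(a, b)` stand for hypothetical discrimination and difficulty parameters.

```python
import math

def p_correct(theta, a, b):
    """2PL IRT: probability of a correct response at ability theta,
    given item discrimination a and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def select_next_item(theta, items, administered):
    """Pick the unadministered item that is most informative at the
    current ability estimate (maximum-information selection)."""
    candidates = [i for i in range(len(items)) if i not in administered]
    return max(candidates, key=lambda i: item_information(theta, *items[i]))

def update_theta(theta, correct, step=0.5):
    """Illustrative fixed-step update: move the ability estimate up
    after a correct response, down after an incorrect one."""
    return theta + step if correct else theta - step

# One simulated turn of the adaptive loop, with made-up item parameters.
item_bank = [(1.2, -1.0), (1.0, 0.0), (0.8, 1.0)]  # (a, b) per item
theta = 0.0
chosen = select_next_item(theta, item_bank, administered=set())
theta = update_theta(theta, correct=True)
```

Because the most informative 2PL item is the one whose difficulty is closest to the current ability estimate, the loop naturally converges toward items matched to the test-taker's level, which is the efficiency gain the abstract attributes to adaptive designs.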
