Mark J. Gierl

Professor; Canada Research Chair in Educational Measurement, Centre for Research in Applied Measurement and Evaluation (CRAME)

Educational Psychology
mark.gierl@ualberta.ca
http://www.ualberta.ca/~mgierl/


Research Interests

Educational and psychological measurement, focusing on assessment engineering, including cognitive modeling, automatic item generation, automated test assembly, and automatic essay scoring; cognitive diagnostic assessment; differential item and bundle functioning; unidimensional and multidimensional item response theory; psychometric methods for evaluating test translation and adaptation.



Representative Publications

Refereed Publications

Gierl, M. J., Bulut, O., Gao, Q., & Zhang, X. (in press). Developing, analyzing, and using distractors for multiple-choice tests: A comprehensive review. Review of Educational Research.

Gierl, M. J., & Lai, H. (in press). Using automatic item generation to support individualized learning. Invited paper to appear in a special issue of Applied Psychological Measurement.

Gierl, M. J., Daniels, L., & Zhang, X. (2017). Creating parallel forms to support on-demand testing for undergraduate students in psychology. Journal of Measurement and Evaluation in Education and Psychology, 8, 298-303.

Gierl, M. J., & Lai, H. (2016). A process for reviewing and evaluating generated test items. Educational Measurement: Issues and Practice, 35, 6-20.

Lai, H., Gierl, M. J., Touchie, C., Pugh, D., Boulais, A., & De Champlain, A. (2016). Using automatic item generation to improve the quality of MCQ distractors. Teaching & Learning in Medicine, 28, 166-173.

Gierl, M. J., Lai, H., Hogan, J., & Matovinovic, D. (2015). A method for generating test items that are aligned to the Common Core State Standards. Journal of Applied Testing Technology, 16, 1-18.

Gierl, M. J., & Lai. H. (2015). Using automated processes to generate test items and their associated solutions and rationales to support formative feedback. Interaction Design & Architecture(s)—IxD&A Journal, N.25, 9-20. Special Issue on Technology-Enhanced Assessment: Agency Change in the Educational Eco-System, Marco Kalz, Eric Ras, & Denise Whitelock (Guest Editors).

Gierl, M. J., & Lai, H. (2015). Using automated processes to generate English and French test items simultaneously. Mesure et évaluation en éducation [Measurement and Evaluation in Education], 37, 39-61. Invited paper appearing in Special Issue on Methodological Advances in Assessment, François Vachon (Guest Editor).

Gierl, M. J., Latifi, F., Lai, H., Boulais, A.-P., & De Champlain, A. (2014). Automated essay scoring and the future of assessment in medical education. Medical Education, 48, 950-962.

Edited Books

Gierl, M. J., & Haladyna, T. (Eds.) (2013). Automatic item generation: Theory and practice. New York: Routledge.

Leighton, J. P., & Gierl, M. J. (Eds.) (2007). Cognitive diagnostic assessment for education: Theory and applications. Cambridge, UK: Cambridge University Press. [Winner of the 2009 American Educational Research Association (Division D) Significant Contribution to Educational Measurement and Research Methodology Award.]


Courses Taught

  • EDPY 500  Introduction to Data Analysis in Educational Research
  • EDPY 505  Quantitative Methods I
  • EDPY 507  Measurement Theory I
  • EDPY 597  Assessment and Evaluation in the Health Sciences (Part I)
  • EDPY 608  Doctoral Seminar in Educational Measurement (Construct Validity)
  • EDPY 697  Test Score Equating

Please contact me if you have questions about my research or our program of studies in the Department. I would be pleased to talk with you. You can also access the CRAME web page at http://www.crame.ualberta.ca/