In large classrooms with limited teacher time, there is a need for automatic evaluation of text answers and real-time personalized feedback during the learning process. In this paper, we present the Amrita Test Evaluation & Scoring Tool (A-TEST), a text evaluation and scoring tool that learns from course materials, from human-rater scored text answers, and directly from teacher input. We use latent semantic analysis (LSA) to identify the key concepts. While most automated essay scoring (AES) systems use LSA to compare student responses against a set of ideal essays, this overlooks the common misconceptions that students may hold about a topic. A-TEST additionally uses LSA to learn these misconceptions from the lowest-scoring essays and incorporates them as a factor in scoring. A-TEST was evaluated on two datasets of 1,400 and 1,800 text answers, each manually scored by two teachers. The scoring accuracy and kappa scores between the derived A-TEST model and the human raters were comparable to those between the human raters themselves.
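The core scoring idea sketched in the abstract, combining similarity to exemplar essays with a penalty for similarity to low-scoring "misconception" essays, can be illustrated in a few lines of Python. The sketch below uses scikit-learn's TfidfVectorizer and TruncatedSVD for LSA; the function names, toy data, and the 0.5 penalty weight are illustrative assumptions, not A-TEST's actual implementation.

```python
# Illustrative sketch: LSA-based scoring that rewards similarity to
# high-scoring essays and penalizes similarity to low-scoring ones.
# Not A-TEST's implementation; names and weights are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

def build_lsa_space(texts, n_components=100):
    """Fit TF-IDF followed by truncated SVD (i.e., LSA) on a corpus."""
    vectorizer = TfidfVectorizer(stop_words="english")
    tfidf = vectorizer.fit_transform(texts)
    svd = TruncatedSVD(n_components=min(n_components, tfidf.shape[1] - 1))
    vectors = svd.fit_transform(tfidf)
    return vectorizer, svd, vectors

def score_answer(answer, vectorizer, svd, ideal_centroid, miscon_centroid):
    """Similarity to ideal essays minus a penalty for misconception similarity."""
    vec = svd.transform(vectorizer.transform([answer]))
    sim_ideal = cosine_similarity(vec, ideal_centroid.reshape(1, -1))[0, 0]
    sim_miscon = cosine_similarity(vec, miscon_centroid.reshape(1, -1))[0, 0]
    return sim_ideal - 0.5 * sim_miscon  # weight would be tuned on scored data

# Usage: fit LSA on scored answers, then take centroids of the
# highest- and lowest-scoring essays as the two reference points.
scored = [("Newton's second law relates force, mass and acceleration.", 5),
          ("Heavier objects always fall faster than lighter ones.", 1),
          ("Force equals mass times acceleration.", 5),
          ("Gravity only acts on heavy objects.", 1)]
texts = [t for t, _ in scored]
vectorizer, svd, vectors = build_lsa_space(texts, n_components=2)
ideal = vectors[[i for i, (_, s) in enumerate(scored) if s >= 4]].mean(axis=0)
miscon = vectors[[i for i, (_, s) in enumerate(scored) if s <= 2]].mean(axis=0)
print(score_answer("An object's acceleration is force divided by mass.",
                   vectorizer, svd, ideal, miscon))
```

Treating the misconception centroid as a separate reference point, rather than simply counting low similarity to ideal essays against the student, lets the model distinguish an answer that is off-topic from one that actively asserts a known misconception.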
Prema Nedungadi, L., J., and Raghu Raman, “Considering Misconceptions in Automatic Essay Scoring with A-TEST - Amrita Test Evaluation and Scoring Tool”, in e-Infrastructure and e-Services for Developing Countries: 5th International Conference, AFRICOMM 2013, Blantyre, Malawi, November 25-27, 2013, Revised Selected Papers, Cham: Springer International Publishing, 2014, pp. 271–281.