Publication Type:

Conference Paper

Source:

AFRICOMM 2013, Springer Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering (LNICST) (2014)

Keywords:

AES, Essay scoring, Feature extraction, Latent Semantic Analysis (LSA), Natural Language Processing (NLP), SVD, Text analysis, Text mining

Abstract:

In large classrooms with limited teacher time, there is a need for automatic evaluation of text answers and real-time personalized feedback during the learning process. In this paper, we discuss the Amrita Test Evaluation & Scoring Tool (A-TEST), a text evaluation and scoring tool that learns from course materials, from human-rater-scored text answers, and directly from teacher input. We use latent semantic analysis (LSA) to identify the key concepts. While most AES systems use LSA to compare students' responses with a set of ideal essays, this overlooks the common misconceptions that students may have about a topic. A-TEST also uses LSA to learn misconceptions from the lowest-scoring essays and uses these as a factor in scoring. A-TEST was evaluated on two datasets of 1400 and 1800 text answers that had been manually scored by two teachers. The scoring accuracy and kappa scores between the derived A-TEST model and the human raters were comparable to those between the human raters themselves.
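The abstract's scoring idea can be illustrated with a short sketch. The following is a minimal, hypothetical Python example (not the authors' A-TEST implementation), assuming scikit-learn: TF-IDF vectors are reduced with truncated SVD (i.e., LSA), and a new answer is scored by its similarity to high-scoring reference essays, penalized by its similarity to the lowest-scoring "misconception" essays. All essay texts and the subtraction-based combination are placeholders for illustration only.

```python
# Minimal LSA scoring sketch (hypothetical; not the authors' A-TEST code).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Toy reference sets (placeholders for teacher-scored training answers).
high_scoring = [
    "Photosynthesis converts light energy into chemical energy in plants.",
    "Chlorophyll absorbs light, driving the synthesis of glucose from CO2 and water.",
]
low_scoring = [  # essays carrying a common misconception
    "Plants eat soil to make their food at night.",
]

# Term-document matrix followed by truncated SVD = latent semantic analysis.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(high_scoring + low_scoring)
lsa = TruncatedSVD(n_components=2, random_state=0)  # k latent "concepts"
Z = lsa.fit_transform(X)

def score(answer: str) -> float:
    """Similarity to ideal essays, penalized by similarity to misconceptions."""
    z = lsa.transform(vectorizer.transform([answer]))
    sim_ideal = cosine_similarity(z, Z[: len(high_scoring)]).max()
    sim_miscon = cosine_similarity(z, Z[len(high_scoring):]).max()
    # A real system would learn how to weight these two factors from the
    # human-scored training data; plain subtraction is only a stand-in.
    return sim_ideal - sim_miscon

print(score("Light energy is turned into chemical energy during photosynthesis."))
```

In the paper's setting, the weight given to the misconception factor would be fit against the human-rater scores rather than fixed as above.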

Cite this Research Publication

P. Nedungadi, J. L., and R. Raman, "Considering Misconceptions in Automatic Essay Scoring with A-TEST - Amrita Test Evaluation & Scoring Tool", in AFRICOMM 2013, Springer Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering (LNICST), 2014.

