Publication Type:

Journal Article

Source:

International Journal of Applied Engineering Research, Volume 9, Number 20, pp. 6747-6755 (2014)

URL:

http://www.scopus.com/inward/record.url?eid=2-s2.0-84939130176&partnerID=40&md5=18df2ee833262941c8b41ae77446a0cc

Abstract:

Conventional code evaluation systems focus on output matching, giving little attention to programming style and practice. However, judgement of coding practice is vital to the process of learning how to program. We therefore propose a framework that evaluates source code by judging good coding practice, rather than by matching the output against predetermined test cases. Since the scope of the problem is large, we plan to implement it for a particular programming platform and paradigm. Our proposed approach is to use well-established code metrics to evaluate training data, which can then be fed to a supervised learning framework. The major challenge identified so far is to define the parameters that indicate 'good' coding style. We plan to resolve this by training our framework to recognize the optimal values and combinations of code metrics, in order to evaluate coding style comprehensively. © Research India Publications.
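The abstract describes the approach only at a high level: compute code metrics for labelled examples and train a supervised learner to distinguish good from bad coding practice. The following is a minimal sketch of that idea, not the authors' implementation; the specific metrics (average line length, comment ratio, identifier length), the toy labelled snippets, and the use of scikit-learn's LogisticRegression are illustrative assumptions.

```python
# Sketch of metric-based supervised evaluation of coding style.
# Metrics and classifier choice are assumptions, not the paper's actual setup.
import re
from sklearn.linear_model import LogisticRegression

def extract_metrics(source: str) -> list[float]:
    """Compute a small, illustrative feature vector from a source-code string."""
    lines = [ln for ln in source.splitlines() if ln.strip()]
    n_lines = max(len(lines), 1)
    avg_line_len = sum(len(ln) for ln in lines) / n_lines
    comment_ratio = sum(ln.lstrip().startswith("#") for ln in lines) / n_lines
    identifiers = re.findall(r"[A-Za-z_]\w*", source)
    avg_ident_len = sum(map(len, identifiers)) / max(len(identifiers), 1)
    return [avg_line_len, comment_ratio, avg_ident_len]

# Hypothetical training corpus: snippets labelled 1 for good practice, 0 for bad.
snippets = [
    "def mean(values):\n    # average of a list\n    return sum(values) / len(values)\n",
    "def f(a):\n x=0\n for i in a: x=x+i\n return x/len(a)\n",
]
labels = [1, 0]

X = [extract_metrics(s) for s in snippets]
clf = LogisticRegression().fit(X, labels)

new_snippet = "def total(xs):\n    # sum of a list\n    return sum(xs)\n"
print(clf.predict([extract_metrics(new_snippet)]))  # 1 = resembles good practice
```

In the paper's terms, learning "optimal values and combinations of code metrics" would correspond to the classifier's learned weights over a much richer metric set and training corpus than this toy example.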

Notes:

Cited by 0

Cite this Research Publication

Deivanayagam, G. K., Gayathiri, D., Manikandan, A., Karthik, K. R. Raghul, Jeyakumar, G., and Kriti, N., "Learning to identify bad coding practice", International Journal of Applied Engineering Research, vol. 9, no. 20, pp. 6747-6755, 2014.