Word embedding models are widely used in NLP tasks such as document classification, author identification, and story understanding. In this paper we compare two word embedding models for semantic similarity in the Tamil language; each model has its own way of predicting relationships between words in a corpus. Method/Analysis: In Natural Language Processing, a word embedding is a representation of words as vectors. Word embeddings are learned in an unsupervised manner, replacing traditional hand-crafted feature extraction; word embedding models use neural networks to generate these numerical representations of words. A morphologically rich language like Tamil is well suited to finding which model best captures semantic relationships between words. Tamil is one of the oldest Dravidian languages and is known for its morphological richness: in Tamil, it is possible to construct 10,000 words from a single root word. Findings: We compare content-based and context-based word embedding models, trying different feature vector sizes for the same word to assess each model's accuracy for semantic similarity. Novelty/Improvement: Analysing word embedding models for a morphologically rich language like Tamil helps classify words better based on their semantics.
Ajay, S. G., Srikanth, M., Anand Kumar, M., and Soman, K. P., “Word Embedding Models for Finding Semantic Relationship between Words in Tamil Language”, Indian Journal of Science and Technology, vol. 9, 2016.
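The semantic-similarity comparison described in the abstract ultimately reduces to measuring how close two word vectors are, regardless of which embedding model or vector size produced them. The sketch below shows the standard cosine-similarity measure on toy vectors; the Tamil words and their 4-dimensional embeddings are illustrative assumptions, not values from the paper's trained models.

```python
import math

def cosine_similarity(u, v):
    # Cosine similarity between two embedding vectors:
    # sim(u, v) = (u . v) / (||u|| * ||v||)
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical low-dimensional embeddings for three Tamil words
# (toy values for illustration only).
vec_amma  = [0.8, 0.1, 0.3, 0.5]    # "அம்மா" (mother)
vec_thaai = [0.7, 0.2, 0.4, 0.6]    # "தாய்" (mother, synonym)
vec_kal   = [-0.5, 0.9, -0.2, 0.1]  # "கல்" (stone)

# Synonyms should score higher than unrelated words.
print(round(cosine_similarity(vec_amma, vec_thaai), 3))  # high, near 1
print(round(cosine_similarity(vec_amma, vec_kal), 3))    # low
```

In an evaluation like the paper's, the same measure would be applied to vectors of different sizes (e.g. 50, 100, 200 dimensions) from each model, and the resulting similarity scores compared against human judgements of word relatedness.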