<p>Sign Language is a visual-spatial language used by the deaf and hard-of-hearing community to convey thoughts and ideas through hand gestures and facial expressions. This paper proposes a novel 3D stroke-based representation of dynamic gestures of Sign Language signs incorporating local as well as global motion information. The dynamic gesture trajectories are segmented into strokes, or sub-units, based on Key Maximum Curvature Points (KMCPs) of the trajectory. This new representation uniquely represents the signs with fewer key frames. We extract 3D global features from global trajectories by representing strokes as 3D codes: each stroke is divided into smaller units (stroke subsegment vectors, or SSVs), and each SSV is assigned to one of 22 partitions. These partitions are obtained using a discretisation procedure which we call an equivolumetric partition (EVP) of the sphere, and the resulting codes are referred to as EVP codes. In addition to global and local hand motion, facial expressions are also considered for non-manual signs, so that the meaning of words is interpreted completely. In contrast to existing methods, our stroke-based representation has a less expensive training phase, since it requires training only the key stroke features and the stroke sequence of each word. © 2017 Springer Science+Business Media New York</p>
M. Geetha and M. R. Kaimal, “A 3D stroke based representation of sign language signs using key maximum curvature points and 3D chain codes”, Multimedia Tools and Applications, pp. 1-34, 2017.