Dysarthria is a set of congenital and traumatic neuromotor disorders that impair the physical production of speech. These impairments reduce or remove normal control of the vocal articulators. The acoustic characteristics of dysarthric speech are very different from those of speech collected from a normative population, with relatively large intra-speaker inconsistencies in the temporal dynamics of the speech. These inconsistencies result in poor audible quality of dysarthric speech and in low phone/speech recognition accuracy. Further, collecting and labeling dysarthric speech is extremely difficult, given the small number of people with these disorders and the difficulty of labeling the data due to the poor quality of the speech. Hence, it is of great interest to explore how to improve the efficiency of acoustic models built on small dysarthric speech databases such as Nemours, or to use speech databases collected from a normative population to build acoustic models for dysarthric speakers. In this work, we explore the latter approach. © 2013 Springer-Verlag.
K. K. George and C. Santhosh Kumar, “Towards enhancing the acoustic models for dysarthric speech”, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 8025 LNCS, pp. 183-188, 2013.