
The breakthrough AI technology developed at the AI and Robotics Research Center, part of HuT Labs at Amrita

May 12, 2025 - 11:53

Rohan Singh (name changed) was living the perfect life. He did not need social media validation to prove it. The energetic 35-year-old, a banker with an MNC, had it all: an aspirational salary, a loving family with two children, and a beautiful sea-facing home.

Then, just two days after his birthday, his life turned upside down. He suffered a stroke and ended up in a hospital bed, paralysed. Apart from the physical, cognitive, and emotional impairments, he realised he could not speak. His children kept telling him about their day, but he could not move his hands or utter a word in reply. He was devastated.

And then came the barrage of financial issues: how to tell his wife about the bank account, the balance, the insurance payments, the car loan, the credit cards, and the other financial commitments. Like many others, Rohan had never imagined such a situation. He had never discussed finances with his wife; he had been managing all of it on his own.

At Amrita Vishwa Vidyapeetham, the mission is compassion-driven research that leads to sustainable solutions for the benefit of mankind.

For Rohan and many others like him, there is hope.

T2HInnovations, a startup established under the AI and Robotics Research Center, part of HuT Labs at Amrita, has developed Netravaad, an innovative technology that uses advanced AI algorithms to convert eye signs and gestures into words and sentences, helping patients with speech impairment who have lost the ability to speak after a stroke or trauma.

T2HInnovations (https://t2hinnovations.com/) is committed to biomedical innovation, specialising in healthcare, rehabilitation, and assistive devices, alongside patient-centric technologies aimed at improving the quality of life.

T2HInnovations operates in close collaboration with Amrita Vishwa Vidyapeetham’s research facilities, leveraging the university’s expertise in engineering and technology to develop impactful solutions such as Netravaad.

Netravaad, rigorously and repeatedly tested at Amrita Institute of Medical Sciences (AIMS) in Kochi, enables communication through eye gestures and represents a major leap for individuals with speech impairments.

Its advanced AI algorithms convert eye signs and gestures into words and sentences, enabling patients who have lost the ability to speak after a stroke or trauma to communicate again.
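For readers curious about the general idea, the Python sketch below illustrates, purely hypothetically, how gaze directions might be mapped to words. The gesture labels, thresholds, and word list are assumptions made for this example; they do not describe the actual Netravaad algorithm.

    # Illustrative sketch only; gesture labels, thresholds, and vocabulary are assumed.
    GESTURE_TO_WORD = {
        "left": "turn left",
        "right": "turn right",
        "up": "food",
        "down": "drinking water",
        "centre": None,  # neutral gaze, no word emitted
    }

    def classify_gaze(pupil_x, pupil_y, frame_w, frame_h):
        """Crude rule-based gaze classifier: where does the pupil sit in the eye frame?"""
        x, y = pupil_x / frame_w, pupil_y / frame_h
        if x < 0.35:
            return "left"
        if x > 0.65:
            return "right"
        if y < 0.35:
            return "up"
        if y > 0.65:
            return "down"
        return "centre"

    def frames_to_words(pupil_positions, frame_w=200, frame_h=100):
        """Convert a sequence of detected pupil positions into a list of words."""
        words = []
        for px, py in pupil_positions:
            word = GESTURE_TO_WORD[classify_gaze(px, py, frame_w, frame_h)]
            if word and (not words or words[-1] != word):  # suppress repeats
                words.append(word)
        return words

    # Example: gaze left, then down, then back to centre.
    print(frames_to_words([(30, 50), (100, 90), (100, 50)]))  # ['turn left', 'drinking water']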

Dr. Rajesh Kannan Megalingam, Director, HuT Labs at Amrita Vishwa Vidyapeetham and Founder, T2H Innovations, said, “We are now in the production phase of Netravaad and all set to bring this transformative technology to a wider audience.”

Dr. Rajesh reminisces about the journey of Netravaad: “It was Amma (Mata Amritanandamayi Devi, Chancellor of Amrita Vishwa Vidyapeetham) who planted the seed of this innovation. She wanted to help people who suffered from paralysis and speech impairment after a stroke or other ailment.”

Netravaad is a cutting-edge concept that turns eye movements into words and sentences in three languages—English, Hindi and Malayalam. Dr. Rajesh says, “We are currently in the process of extending this technology to other regional languages so that it benefits a larger group of people.”

Challenges for Netravaad: Taking the tough path  

The T2HInnovations team faced several hurdles along the way. Patients on medication experienced difficulties using the device, especially in the initial phase; many were not mentally ready for it, and some were uncooperative at first.

Training program 

The team was determined not to be dejected and found solutions to all the hurdles. They introduced an initial training procedure to help patients adapt to the device.

The Netravaad team has produced an eye-sign training video in three languages: English, Hindi, and Malayalam. It teaches the patient to communicate words such as turn left/right, food, drinking water, sleep, medicine, and pain, among others.

Testing & Improvements

The team conducted rigorous testing and identified some crucial issues. Based on the feedback, the team made two key modifications:

  1. User Interface Enhancement
  2. Design Structure Modification

AI Algorithm Improvement

One of the main improvements was to accommodate variations in different age groups, illuminance, and the position of the patient.

To address these issues, Dr Rajesh’s team fine-tuned the AI algorithm and added more data to the existing dataset. Additionally, the graphical user interface was made more user-friendly with multiple versions, based on feedback from caretakers, healthy subjects, and doctors.

Dr. Rajesh is happy to note: “We improved the AI algorithm and the Graphical User Interface based on the feedback we received from different test cases and from testing with patients. The team modified and fine-tuned the dataset and adjusted the Deep Learning algorithm based on variations in age groups, illuminance, and the condition of the patient.”

Illuminance & Environmental Factors: 

Varying lighting conditions caused illuminance issues. For example, natural sunlight differs from morning to evening, and at night or indoors, patients rely on artificial lights, which also have different flux levels.

Illuminance is the measurement of the amount of light falling onto (illuminating) and spreading over a given surface area; it also correlates with how humans perceive the brightness of an illuminated space. The scientists encountered illuminance and other environmental issues during testing and addressed them by improving the dataset, adding images captured under various illumination conditions.

The Netravaad team initially used an OpenCV-based method and gradually moved to Deep Learning, achieving consistent performance across varying levels of illuminance. The major environmental factor affecting the device is light, which in turn can be influenced by climatic conditions, especially when it depends on sunlight.
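As an illustration of this kind of dataset work, the hypothetical Python/OpenCV sketch below adds re-lit copies of a training image at several brightness levels. The file path and gamma values are assumptions made for this example, not the team’s actual pipeline.

    # Illustrative sketch: augment a dataset image with varied illumination levels.
    import cv2
    import numpy as np

    def gamma_adjust(image, gamma):
        """Simulate brighter (gamma < 1) or darker (gamma > 1) lighting via a lookup table."""
        table = np.array([((i / 255.0) ** gamma) * 255 for i in range(256)], dtype=np.uint8)
        return cv2.LUT(image, table)

    def augment_for_illuminance(src_path, gammas=(0.5, 0.75, 1.5, 2.0)):
        """Write one re-lit copy of the image per gamma value, alongside the original."""
        image = cv2.imread(src_path)
        if image is None:
            raise FileNotFoundError(src_path)
        for gamma in gammas:
            out_path = src_path.replace(".jpg", f"_g{gamma}.jpg")
            cv2.imwrite(out_path, gamma_adjust(image, gamma))

    # Usage (illustrative path only):
    # augment_for_illuminance("eye_dataset/patient01_left.jpg")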

Currently, the device performs consistently under varying sunlight conditions, as well as when using external light.

High performer with low dependencies 

Light is the only necessary condition. If there is not enough sunlight or it is dark, the device can rely on room lighting at different flux levels and still deliver consistent performance. Beyond this, the device has no environmental dependencies.

Design Readiness 

The intuitive design ensures that individuals with physical and cognitive challenges can easily navigate and utilise the application.

Dr. Rajesh says, “The other issue we have dealt with is the age factor. When considering different age categories, especially individuals over 60 years old, we initially did not achieve good accuracy for this group. To address this, we improved our dataset by adding more images of older individuals.”

During testing, the medical condition of the patient also plays a role. For example, immediately after a stroke, patients are on medication and find it difficult to use the device due to their mental condition and other health issues. To address this problem, the team changed its approach.

Training Program 

The team began with training procedures for a few days, and only after observing improvements in the patient did they introduce the device; the device itself was not used during training.

Additionally, the team adopted a strategy of not considering individuals who are still under medication after a stroke or other major ailment. These patients are introduced to the training and the device only after their condition improves with medication and adequate rest.

Ability to generate numbers and sentences 

The system includes two modes in each language: Mode 1 and Mode 2.

Mode 1 features 10 predefined words, while Mode 2 includes 26 letters in English and 29 letters each in Malayalam and Hindi. Additionally, users can create sentences and generate numbers through these modes, enhancing their ability to communicate effectively.
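A minimal, purely illustrative Python sketch of how the two modes could be organised per language is shown below. Only the counts (10 words, 26 and 29 letters) come from the text; the specific word list and selection logic are assumptions for this example.

    # Only the counts come from the article; word list and helpers are assumed.
    MODES = {
        "english": {
            "mode1": ["turn left", "turn right", "food", "drinking water", "sleep",
                      "medicine", "pain", "yes", "no", "help"],   # 10 words; last three assumed
            "mode2": [chr(c) for c in range(ord("a"), ord("z") + 1)],  # 26 letters
        },
        # "hindi" and "malayalam" entries would each carry 29 letters in mode2.
    }

    def build_output(selections, language="english"):
        """Join a sequence of (mode, index) selections into a single output string."""
        parts = [MODES[language][mode][index] for mode, index in selections]
        return " ".join(parts)

    # Example: the word "food" from Mode 1, then the letters "o" and "k" spelled in Mode 2.
    print(build_output([("mode1", 2), ("mode2", 14), ("mode2", 10)]))  # "food o k"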

Infrastructure and Cost  

The device costs Rs. 1,65,000. Since similar products are not available in India, the team can ship the device across the country. No separate infrastructure needs to be built for manufacturing.

Usability for Children

The device has been tested with children. Initially, children need training to use the device, much like learning a new language; with gradual practice, they were able to adapt.

Market comparison 

Compared to similar products in the market, Netravaad enables the user to regain full communication capability through the eyes, whereas other similar devices provide only a limited communication capability.

For example, if a teacher has lost his/her ability to speak, Netravaad enables him/her to take classes as usual, which no other device available in the market is capable of.

Market & Expansion

The device is designed for both Indian and international users and is not restricted to any specific region.

When asked whether he is expecting some sort of support from the government, Dr. Rajesh says, “If the government provides financial support for mass production, we would consider it a great opportunity.”

About HuT Labs 

HuT Labs (Humanitarian Technology Labs https://www.amrita.edu/center/humanitarian-technology-hut-labs/) at Amrita Vishwa Vidyapeetham is an engineering research lab using robotics for social causes. It is headed by Prof. Rajesh Kannan Megalingam (https://www.amrita.edu/faculty/rajeshm/) of the Department of Electronics and Communication Engineering.

HuT Labs was established in 2012. Within a year of its establishment, HuT Labs had published more than 10 research papers, and it currently has more than 220 publications in reputed journals and international conferences.

With more than 50 national and international achievements and recognitions, HuT Labs is one of the top research labs in India in terms of using technology for humanitarian causes.

Robots for Humanity

Some of the significant robots built at HuT Labs include:

  • Self-E, a self-driving wheelchair robot;
  • Amaran, a coconut tree climbing and harvesting robot;
  • Mudra, a hand gesture-based wheelchair robot;
  • Paripreksya, a search and rescue robot for disaster management;
  • CHETAK, a self-governing, multi-tasking home assistance robot.

About T2HInnovations

T2HInnovations (https://t2hinnovations.com/) is a Kerala-based startup committed to biomedical innovation, specialising in healthcare, rehabilitation, and assistive devices, alongside patient-centric technologies aimed at improving the quality of life.

Established under the HuT Labs program of Amrita Vishwa Vidyapeetham, T2HInnovations collaborates closely with the Amrita Vishwa Vidyapeetham’s research facilities to bring impactful healthcare solutions to the market.


