Machine Learning May Help Understand Neurodegenerative Diseases
Facial recognition and analysis, and the machine learning techniques behind them, have many applications, from verifying identity documents to unlocking mobile phones. Now this technology may have the power to help doctors better understand and diagnose neurodegenerative diseases.
Diego Guarín, assistant professor in the biomedical engineering program, has been using facial recognition, facial analysis, and machine learning algorithms to evaluate the effects of different neurodegenerative and motor disorders on speech production and facial movements.
The overall goal of Guarín’s research is to use facial recognition techniques to determine whether patients have neurodegenerative or motor disorders, such as Bell’s palsy, Parkinson’s disease and ALS. This technology would allow doctors to provide more effective care to individuals who are developing these diseases, to track disease progression, and to establish a baseline against which improvement with treatment can be measured.
Guarín first uses an artificial neural network to locate specific points, or landmarks, on the face, then uses that tracking to measure how the person moves their face. His experiments are currently performed in well-controlled environments, such as a room with a good camera, low noise and proper illumination. His next step is to test these approaches for detecting neurodegenerative diseases in home environments during a video call.
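The idea of tracking facial movement from landmarks can be sketched in a few lines. This is an illustrative toy, not Guarín’s actual pipeline: it assumes a neural network has already produced per-frame landmark coordinates (the landmark ID and coordinates below are made up), and simply measures how far a mouth-corner point travels between frames.

```python
# Illustrative sketch: given facial landmark coordinates estimated per video
# frame by a neural network, track how far a single landmark moves over time.
import math

def landmark_displacements(frames, landmark_id):
    """Euclidean distance a landmark travels between consecutive frames.

    frames: list of dicts mapping landmark_id -> (x, y) pixel coordinates.
    """
    path = [frame[landmark_id] for frame in frames]
    return [
        math.dist(a, b)  # distance moved from frame i to frame i+1
        for a, b in zip(path, path[1:])
    ]

# Toy example: a mouth corner (hypothetical landmark id 61) moving as the
# person opens their mouth and speaks.
frames = [
    {61: (100.0, 200.0)},
    {61: (103.0, 200.0)},
    {61: (103.0, 204.0)},
]
print(landmark_displacements(frames, 61))  # [3.0, 4.0]
```

In a real system, statistics over such displacement traces (range, speed, smoothness) would be compared against norms for healthy individuals in the patient’s age group.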
“The patients are going to perform certain exercises in front of the camera, such as opening their mouth and saying some sentences,” he said. “The computer is going to track how the patient moves their mouth and face, and then we’re going to have an idea of if that movement belongs to a healthy individual in their age group or if that movement belongs to a person with a specific disease.”
Previous testing has been successful: Guarín’s techniques recognized individuals’ diseases based on photos and videos recorded in laboratory and clinical settings. This was detailed in a paper, “The Auto-eFACE: Machine Learning–Enhanced Program Yields Automated Facial Palsy Assessment Tool,” involving Guarín and researchers from Harvard University, published in Plastic and Reconstructive Surgery, the journal of the American Society of Plastic Surgeons. In the study, Auto-eFACE, a software tool developed by Guarín to automatically analyze and score faces based on static and dynamic symmetry, differentiated normal faces from those with facial palsy. The report also noted that Auto-eFACE’s scores were comparable to scores provided by expert clinicians, opening the door to automated facial analysis in facial palsy.
In another study, “A New Dataset for Facial Motion Analysis in Individuals with Neurological Disorders,” published in the April edition of the IEEE Journal of Biomedical and Health Informatics, Guarín and coauthors from the Toronto Rehabilitation Institute, the University of Toronto, and Simon Fraser University introduced the first public dataset of videos of orofacial gestures – movements related to the mouth and face – performed by individuals with orofacial impairment due to neurological disorders such as amyotrophic lateral sclerosis (ALS) and stroke. The paper also introduced a technique to automatically evaluate disease severity based on facial movements estimated from the videos.
“We are learning how to use information extracted from simple videos to make clinical diagnoses, and this research is very appealing because the data is very easy to measure,” Guarín said. “Everyone sits down in front of the computer to have Zoom meetings, so why not sit down in front of the computer to have an assessment?”