AI Can Spot Early Signs of Alzheimer’s in Speech Patterns, Study Shows

New technologies that can capture subtle changes in a patient’s voice may help physicians diagnose cognitive impairment and Alzheimer’s disease before symptoms begin to show
Researchers used advanced machine learning and natural language processing (NLP) tools to assess speech patterns. (Unsplash)

New technologies that can capture subtle changes in a patient’s voice may help physicians diagnose cognitive impairment and Alzheimer’s disease before symptoms begin to show, according to a UT Southwestern Medical Center researcher who led a study published in the Alzheimer’s Association publication Diagnosis, Assessment & Disease Monitoring.

“Our focus was on identifying subtle language and audio changes that are present in the very early stages of Alzheimer’s disease but not easily recognizable by family members or an individual’s primary care physician,” said Ihab Hajjar, M.D., Professor of Neurology at UT Southwestern’s Peter O’Donnell Jr. Brain Institute.

Researchers used advanced machine learning and natural language processing (NLP) tools to assess speech patterns in 206 people – 114 who met the criteria for mild cognitive impairment and 92 who were unimpaired. The team then mapped those findings to commonly used biomarkers to determine their efficacy in measuring impairment.
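
The study does not publish its code, but the general workflow it describes can be illustrated with a short, hypothetical sketch: numeric features extracted from each recording (the feature values and names below are placeholders, not the study's pipeline) are fed to an off-the-shelf classifier that tries to separate the impaired and unimpaired groups.

```python
# Hypothetical sketch: classifying speakers as MCI vs. unimpaired from
# pre-computed speech features. The data are random placeholders, not the
# study's actual features or results.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# 206 participants, each represented by a few illustrative speech features
# (e.g., speaking rate, pause ratio, idea density, grammatical complexity).
X = rng.normal(size=(206, 4))
y = np.array([1] * 114 + [0] * 92)  # 1 = mild cognitive impairment, 0 = unimpaired

# Standardize the features, then fit a simple linear classifier.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# Cross-validated accuracy gives a rough sense of group separability.
scores = cross_val_score(model, X, y, cv=5)
print("Mean CV accuracy:", scores.mean())
```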

Study participants, who were enrolled in a research program at Emory University in Atlanta, were given several standard cognitive assessments before being asked to record a spontaneous 1- to 2-minute description of artwork.

During the study, researchers spent fewer than 10 minutes capturing a patient’s voice recording. Traditional neuropsychological tests typically take several hours to administer. 

“The recorded descriptions of the picture provided us with an approximation of conversational abilities that we could study via artificial intelligence to determine speech motor control, idea density, grammatical complexity, and other speech features,” Dr. Hajjar said.
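
Two of the speech features Dr. Hajjar mentions, idea density and grammatical complexity, can be roughly approximated from a transcript with standard NLP tooling. The sketch below is a simplified illustration using spaCy and common proxy definitions, not the measures or software used in the study.

```python
# Hypothetical sketch of two speech-feature proxies mentioned in the article:
# idea density and grammatical complexity. These are rough approximations,
# not the study's actual measures.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

PROPOSITIONAL_POS = {"VERB", "ADJ", "ADV", "ADP", "CCONJ", "SCONJ"}
SUBORDINATE_DEPS = {"advcl", "ccomp", "xcomp", "acl", "relcl"}

def speech_features(transcript: str) -> dict:
    doc = nlp(transcript)
    words = [t for t in doc if t.is_alpha]
    sentences = list(doc.sents)

    # Idea density: proposition-bearing words (verbs, adjectives, adverbs,
    # prepositions, conjunctions) per word, a common proxy for propositional density.
    idea_density = sum(t.pos_ in PROPOSITIONAL_POS for t in words) / max(len(words), 1)

    # Grammatical complexity: average sentence length and the rate of
    # subordinate clauses per sentence, two simple syntactic proxies.
    mean_sentence_len = len(words) / max(len(sentences), 1)
    subordination = sum(t.dep_ in SUBORDINATE_DEPS for t in doc) / max(len(sentences), 1)

    return {
        "idea_density": idea_density,
        "mean_sentence_length": mean_sentence_len,
        "subordinate_clauses_per_sentence": subordination,
    }

print(speech_features("The boy climbed on the stool because he wanted a cookie."))
```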

The research team compared the participants’ speech analytics to their cerebrospinal fluid samples and MRI scans to determine how accurately the digital voice biomarkers detected both mild cognitive impairment and Alzheimer’s disease status and progression.
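
As an illustration of that kind of validation (not the study's actual analysis), agreement between a voice-derived risk score and a biomarker-defined label is often summarized with the area under an ROC curve; a minimal sketch with made-up numbers:

```python
# Hypothetical sketch: scoring how well a voice-derived risk score agrees with
# an Alzheimer's label defined by CSF or MRI biomarkers. All numbers are invented.
from sklearn.metrics import roc_auc_score

biomarker_positive = [1, 1, 0, 0, 1, 0, 1, 0, 0, 1]  # 1 = biomarker evidence of AD
voice_risk_score   = [0.9, 0.7, 0.3, 0.2, 0.8, 0.4, 0.6, 0.1, 0.5, 0.7]

# Area under the ROC curve: 1.0 means perfect agreement, 0.5 means chance level.
print("AUC:", roc_auc_score(biomarker_positive, voice_risk_score))
```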

“Prior to the development of machine learning and NLP, the detailed study of speech patterns in patients was extremely labor intensive and often not successful because the changes in the early stages are frequently undetectable to the human ear,” Dr. Hajjar said. “This novel method of testing performed well in detecting those with mild cognitive impairment and more specifically in identifying patients with evidence of Alzheimer’s disease – even when it cannot be easily detected using standard cognitive assessments.” 

“If confirmed with larger studies, the use of artificial intelligence and machine learning to study vocal recordings could provide primary care providers with an easy-to-perform screening tool for at-risk individuals,” Dr. Hajjar said. “Earlier diagnoses would give patients and families more time to plan for the future and give clinicians greater flexibility in recommending promising lifestyle interventions.” (PB/Newswise)
