PhD Defense: Exploring the Power of Machine Learning in Medical Research: A Focus on Movement Disorder Diagnosis and Age-Related Hearing Loss
IRB: IRB-4107
Medical research is undergoing a remarkable evolution driven by advances in data science and machine learning. The availability of large datasets, enhanced computing capabilities, and advanced algorithms has opened new possibilities for extracting valuable insights, identifying patterns, and developing predictive models from complex biomedical data. This integration has the potential to transform medical research, resulting in enhanced diagnostic capabilities, personalized treatment approaches, and, ultimately, improved patient care.
In this dissertation, I explore the impact of data science and machine learning in medical research, with a specific focus on the diagnosis of movement disorders and age-related hearing loss. In the first of these domain areas, I use data from a wearable sensor to accurately identify individuals with Parkinson’s disease based on their movements during several motor tasks. I demonstrate that applying machine learning to wearable sensor data can achieve diagnostic accuracy surpassing that of movement disorder experts in routine clinical settings for differentiating Parkinson’s disease from controls, and accuracy comparable to expert clinicians for distinguishing Parkinson’s disease from other parkinsonian disorders. I also find that repeating mobility tasks does not improve diagnostic accuracy. I therefore propose several steps to simplify mobility test protocols, saving time and effort for both clinicians and participants without compromising accuracy. Specifically, for the classification problems explored in this study, using a single sensor, a single mobility task, and a single trial of that task is sufficient to streamline the process. This approach facilitates the practical application of wearable sensors as a diagnostic tool in clinical settings.
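To illustrate the general shape of such an analysis (a minimal sketch only, not the pipeline used in this dissertation), the snippet below shows how summary features derived from a single wearable sensor during one trial of one mobility task might feed a cross-validated classifier. The feature matrix, labels, and choice of a random-forest model from scikit-learn are all assumptions made for demonstration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical feature matrix: each row holds summary features (e.g., gait speed,
# tremor amplitude, postural-sway metrics) computed from one trial of a single
# mobility task recorded with a single wearable sensor.
# Labels: 1 = Parkinson's disease, 0 = control. All values are placeholders.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 20))      # placeholder sensor-derived features
y = rng.integers(0, 2, size=120)    # placeholder diagnostic labels

# Standardize features, then fit a random-forest classifier under 5-fold CV.
clf = make_pipeline(StandardScaler(),
                    RandomForestClassifier(n_estimators=300, random_state=0))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
print(f"Cross-validated AUC: {scores.mean():.2f} +/- {scores.std():.2f}")
```

With real sensor data, the placeholder arrays would be replaced by task-specific features and clinical labels; the cross-validated score is what would be compared against clinician performance.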
In the second domain area, I study age-related hearing loss by constructing ensemble models to examine data from participants with diverse ages and varying degrees of hearing loss. By integrating audiometric, perceptual, electrophysiological, and cognitive data, I predict speech perception in challenging auditory conditions such as noise, reverberation, and time compression. Leveraging machine learning techniques, my objective is to identify the variables that are most predictive of speech perception under these demanding conditions, thereby confirming existing associations and potentially uncovering novel ones. The findings underscore the critical role of audiometric thresholds, particularly within the 1–4 kHz range, and highlight the utility of composite variables spanning multiple frequencies in accurately predicting speech perception. Furthermore, basic temporal processing ability shows a moderate influence, whereas cognitive factors and extended high-frequency thresholds exhibit limited to negligible predictive capability in this context. Continued research and exploration of these associations will contribute to a deeper understanding of the complex interplay between speech perception, aging, hearing loss, and cognition.
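As a hedged illustration of this kind of predictor-ranking analysis (synthetic data throughout, not the dissertation's dataset, variables, or model), the snippet below fits a gradient-boosting ensemble to hypothetical audiometric, temporal, and cognitive predictors of a speech-perception score and ranks them by permutation importance. Every variable name and value is invented for demonstration.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic predictors standing in for the kinds of measures described above.
rng = np.random.default_rng(0)
n = 200
X = pd.DataFrame({
    "pta_1_4_khz": rng.normal(30, 15, n),        # audiometric average, 1-4 kHz (dB HL)
    "ehf_threshold": rng.normal(45, 20, n),      # extended high-frequency threshold (dB HL)
    "gap_detection_ms": rng.normal(5, 2, n),     # basic temporal-processing measure
    "cognitive_composite": rng.normal(0, 1, n),  # standardized cognitive score
})
# Simulated outcome (percent correct in noise); purely illustrative.
y = 90 - 0.8 * X["pta_1_4_khz"] - 1.5 * X["gap_detection_ms"] + rng.normal(0, 5, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# Permutation importance ranks predictors by how much shuffling each one degrades
# held-out performance, giving a model-agnostic view of which variables matter most.
result = permutation_importance(model, X_test, y_test, n_repeats=30, random_state=0)
for name, imp in sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name:>20s}: {imp:.3f}")
```

In practice, the same ranking procedure applied to real multi-domain data is what allows audiometric, temporal, and cognitive predictors to be compared on a common footing.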