Interpretability of machine learning models that predict cognitive impairment from human speech and language

Machine learning has great potential for detecting cognitive, mental, and functional health disorders from speech, as acoustic properties of speech and corresponding patterns in language are modified by a variety of health-related effects. In particular, neural language models have recently demonstrated impressive abilities in tasks involving linguistic knowledge. Their success in language understanding and classification tasks can be attributed to their effective representations of linguistic knowledge. However, the increasing complexity of state-of-the-art models makes them behave as black boxes that are not easily interpretable. The successful adoption of machine learning models in healthcare applications relies heavily on how well decision makers are able to understand and trust their functionality. Only if decision makers have a clear understanding of model behavior can they diagnose errors and potential biases in these models, and decide when and how much to rely on them. As such, it is important to create techniques for explaining black box models in a human-interpretable manner.
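One widely used family of model-agnostic explanation techniques is permutation feature importance: shuffle one input feature at a time and measure how much the model's accuracy drops. The sketch below illustrates the idea on a toy "black box" scoring two hypothetical speech features (`pause_rate` and `vocab_richness` are invented names for illustration, not features from this project); a real application would use a trained classifier and clinically validated features.

```python
import random

# Toy "black box": a fixed linear scorer over two hypothetical speech
# features. In practice this would be a trained, opaque model.
def black_box_predict(pause_rate, vocab_richness):
    return 1 if (2.0 * pause_rate - 1.0 * vocab_richness) > 0.5 else 0

# Small synthetic dataset: (pause_rate, vocab_richness, label).
data = [
    (0.9, 0.2, 1), (0.8, 0.3, 1), (0.7, 0.1, 1), (0.85, 0.25, 1),
    (0.2, 0.9, 0), (0.1, 0.8, 0), (0.3, 0.7, 0), (0.15, 0.85, 0),
]

def accuracy(rows):
    correct = sum(black_box_predict(p, v) == y for p, v, y in rows)
    return correct / len(rows)

def permutation_importance(rows, feature_idx, n_repeats=50, seed=0):
    """Mean drop in accuracy when one feature column is shuffled."""
    rng = random.Random(seed)
    base = accuracy(rows)
    drops = []
    for _ in range(n_repeats):
        col = [row[feature_idx] for row in rows]
        rng.shuffle(col)
        shuffled = [
            (col[i], row[1], row[2]) if feature_idx == 0
            else (row[0], col[i], row[2])
            for i, row in enumerate(rows)
        ]
        drops.append(base - accuracy(shuffled))
    return sum(drops) / len(drops)

imp_pause = permutation_importance(data, 0)
imp_vocab = permutation_importance(data, 1)
```

A larger importance score means the model leans more heavily on that feature, which gives clinicians a starting point for judging whether the model's decisions rest on plausible markers. Library implementations (e.g. `sklearn.inspection.permutation_importance`) follow the same logic.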

Faculty Supervisor: Frank Rudzicz; Andrei Badescu


Malikeh Ehghaghi


WinterLight Labs Inc


Computer Science



University of Toronto


