Uncertainty Quantification for Deep Neural Networks

Deep neural networks are effective at image classification and other predictive tasks, achieving higher accuracy than conventional machine learning methods. However, unlike those methods, their predictions are less interpretable. While accuracy alone may suffice for applications where errors are not costly, in real-world applications we also want to know when a prediction is likely to be correct. An estimate of how likely a prediction is to be correct is called its confidence, or conversely its uncertainty. To deploy these models in the public sphere, we need to better understand why they make the predictions they do. This project focuses on one aspect of that understanding: developing methods to estimate the uncertainty associated with a given prediction. This research will allow practitioners to place appropriate confidence in the models' predictions.
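The project description does not name a specific technique. As a minimal sketch of one common baseline, the snippet below uses Monte Carlo dropout: dropout is kept active at inference time, several stochastic forward passes are averaged, and the predictive entropy is reported as an uncertainty score. The model architecture, input sizes, and parameters are hypothetical placeholders, not the project's actual method.

```python
import torch
import torch.nn as nn

# Hypothetical toy classifier; the project does not specify an architecture.
model = nn.Sequential(
    nn.Linear(16, 64),
    nn.ReLU(),
    nn.Dropout(p=0.5),   # dropout stays active at inference for MC sampling
    nn.Linear(64, 10),
)

def mc_dropout_predict(model, x, n_samples=50):
    """Estimate predictive mean and uncertainty from stochastic forward passes."""
    model.train()  # keep dropout active so each forward pass differs
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )  # shape: (n_samples, batch, classes)
    mean_probs = probs.mean(dim=0)
    # Predictive entropy: higher values indicate less confident predictions.
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_probs, entropy

x = torch.randn(4, 16)  # a batch of hypothetical inputs
mean_probs, entropy = mc_dropout_predict(model, x)
print(mean_probs.argmax(dim=-1), entropy)
```

Averaging multiple dropout-perturbed predictions gives a rough proxy for model uncertainty without retraining; other approaches (deep ensembles, temperature scaling, Bayesian methods) trade off cost and calibration quality differently.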

Faculty Supervisor:

Adam Oberman

Student:

Mariana Prazeres; Aram Pooladian; Ryan Campbell

Partner:

Fédération des caisses Desjardins

Discipline:

Mathematics

Sector:

University:

McGill University

Program:

Accelerate
