Uncertainty Quantification for Deep Neural Networks

Deep neural networks are effective at image classification and other predictive tasks, achieving higher accuracy than conventional machine learning methods. However, their predictions are less interpretable than those of conventional methods. While accuracy alone may suffice for applications where errors are not costly, in real-world applications we also want to know when a prediction is likely to be correct. An estimate of the likelihood that a prediction is correct is called its confidence; the complementary notion is its uncertainty. In order to deploy these models in public-facing settings, we need to better understand why they make the predictions they do. This project focuses on one aspect of that understanding: developing methods to estimate the uncertainty associated with a given prediction. This research will allow us to place greater trust in the predictions of these models.
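The abstract above does not specify a particular uncertainty method. As a minimal illustration of the baseline such work typically starts from, the sketch below (in plain NumPy, with hypothetical logits) uses the maximum softmax probability of a classifier as a confidence estimate; this baseline is known to be often overconfident, which is one motivation for better uncertainty estimates.

```python
import numpy as np

def softmax(logits):
    # Subtract the max logit for numerical stability before exponentiating.
    z = logits - np.max(logits, axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def prediction_with_confidence(logits):
    """Return the predicted class and its softmax probability,
    the simplest (and often overconfident) confidence proxy."""
    probs = softmax(logits)
    pred = int(np.argmax(probs))
    return pred, float(probs[pred])

# Hypothetical logits from a 3-class classifier.
pred, conf = prediction_with_confidence(np.array([2.0, 0.5, 0.1]))
```

A low maximum probability flags a prediction as uncertain; more refined methods (e.g. calibration or ensembling) aim to make this number better reflect the true probability of being correct.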

Mariana Prazeres;Aram Pooladian;Ryan Campbell
Faculty Supervisor: Adam Oberman
Partner University: