Research into Convolutional Neural Network (CNN) Explainability

Machine Learning is advancing at an astounding rate, powered by complex models such as deep neural networks (DNNs). These models have a wide range of real-world applications in fields like Computer Vision, Natural Language Processing, and Information Retrieval. But Machine Learning is not without serious limitations and drawbacks. The most serious is the lack of transparency in model inferences, which works against relying completely on these models and leaves users with little understanding of how particular decisions are made. In this research we will explore new ways to represent the evolution of a CNN during training, and how the artifacts generated by these representations can be traced back to the inputs used during training. We will also explore the issue of dataset imbalance from the perspective of Parallel Coordinates, and how this technique can be used to visualize imbalance. The preliminary prototypes will be evaluated through usability tests with experts within the partner organization.
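As a rough illustration of the Parallel Coordinates idea (not the project's actual implementation), the sketch below plots a small, hypothetical imbalanced dataset with pandas' built-in parallel-coordinates plot: each line is one sample drawn across several feature axes and colored by class, so an over-represented class visibly crowds the figure. The dataset, feature names, and class sizes are invented for the example.

```python
# Minimal sketch: viewing class imbalance through a parallel-coordinates plot.
# Assumes a hypothetical dataset with two classes of very different sizes.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates

rng = np.random.default_rng(0)

def make_class(label, n_samples, shift):
    """Generate n_samples points with four synthetic features for one class."""
    data = rng.normal(loc=shift, scale=1.0, size=(n_samples, 4))
    df = pd.DataFrame(data, columns=["f1", "f2", "f3", "f4"])
    df["label"] = label
    return df

# Hypothetical imbalance: 500 samples of class "A" vs. 50 of class "B".
df = pd.concat([make_class("A", 500, 0.0), make_class("B", 50, 2.0)])

# Each sample becomes one polyline across the feature axes; the dominant
# class crowds the plot, making the imbalance visually apparent.
parallel_coordinates(df, class_column="label", alpha=0.3)
plt.title("Class imbalance seen through parallel coordinates")
plt.show()
```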

Intern: Andreas Koenzen
Faculty Supervisor: Margaret-Anne Storey
Province: British Columbia