Fast and Accurate Computation of Wasserstein Adversarial Examples

Machine learning (ML) has recently achieved impressive success in many applications. As ML penetrates safety-critical domains, the security and robustness of ML systems have received a lot of attention. Surprisingly, recent work has shown that current ML models are vulnerable to adversarial attacks: by perturbing the input slightly, an ML model can be manipulated into producing completely unexpected results. Many attack and defence algorithms have been developed under the convenient but questionable Lp attack model. We study an alternative attack model based on the Wasserstein distance, which has rich geometric meaning and is better aligned with human perception. Existing algorithms for computing Wasserstein adversarial examples are very time-consuming. The goal of this project is to significantly speed up the generation of Wasserstein adversarial examples by carefully reformulating the problem and by exploiting better optimization techniques.
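As a sketch of the attack model in question (the standard formulation from the adversarial-examples literature, with notation assumed here rather than taken from the project description): given a classifier f, an input x with label y, and a loss ℓ, a Wasserstein adversarial example solves

```latex
\max_{x'} \; \ell\big(f(x'), y\big)
\quad \text{subject to} \quad
W(x, x') \le \epsilon,
\qquad
W(x, x') = \min_{\Pi \in \mathcal{U}(x, x')} \langle \Pi, C \rangle,
```

where the images x and x' are treated as distributions of pixel mass, 𝒰(x, x') is the set of transport plans whose marginals are x and x', and C is a ground cost between pixel locations. Under the Lp model the constraint would instead be ‖x′ − x‖p ≤ ε; the Wasserstein ball allows perturbations that move mass between nearby pixels, which is the geometric meaning the description refers to.

To make the constraint concrete, the following is a minimal, self-contained Python sketch of the standard entropic-regularized Sinkhorn solver, a common way to approximate Wasserstein distances. It is illustrative only, not the faster algorithm this project develops, and all names are ours. It measures the distance between two one-pixel "images" that differ by a one-pixel shift of mass:

```python
import numpy as np

def sinkhorn_distance(a, b, cost, reg=0.1, iters=500):
    """Approximate the Wasserstein distance between histograms a and b
    using standard entropic-regularized Sinkhorn iterations."""
    K = np.exp(-cost / reg)              # Gibbs kernel from the ground cost
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)                # rescale to match column marginals b
        u = a / (K @ v)                  # rescale to match row marginals a
    plan = u[:, None] * K * v[None, :]   # resulting transport plan
    return np.sum(plan * cost)           # total cost of moving the mass

# Two normalized 1-D "images": the same unit of mass, shifted by one pixel.
n = 8
a = np.zeros(n); a[2] = 1.0
b = np.zeros(n); b[3] = 1.0
# Ground cost: distance between pixel locations.
cost = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :]).astype(float)
print(sinkhorn_distance(a, b, cost))     # ~1.0: one unit of mass moved one pixel
```

The printed value is close to 1.0, the cost of moving one unit of mass by one pixel, and it would grow with the size of the shift. By contrast, the L∞ distance between a and b is 1 no matter how far the pixel moves, which is one way the Wasserstein model tracks perceptual similarity better than the Lp model.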

Intern: Kaiwen Wu
Faculty Supervisor: Yaoliang Yu
Province: Ontario