Machine learning (ML) has recently achieved impressive success in many applications. As ML penetrates safety-critical domains, the security and robustness of ML systems have received considerable attention. Surprisingly, recent work has shown that current ML models are vulnerable to adversarial attacks: by slightly perturbing the input, an attacker can manipulate a model into producing completely unexpected outputs. Many attack and defence algorithms have been developed under the convenient but questionable Lp attack model. We study an alternative attack model based on the Wasserstein distance, which has rich geometric meaning and is better aligned with human perception. Existing algorithms for computing Wasserstein adversarial examples are very time-consuming. The goal of this project is to significantly speed up the generation of Wasserstein adversarial examples by carefully reformulating the problem and by exploiting better optimization techniques.
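To make the Lp attack model mentioned above concrete, the sketch below shows a standard projected gradient descent (PGD) attack under an L-infinity budget. The toy model, data, and the eps/alpha/steps values are illustrative assumptions only; they are not part of this project, which studies the Wasserstein attack model instead.

```python
# Minimal sketch of an L-infinity (Lp) adversarial attack via projected
# gradient descent (PGD): perturb the input within an eps-ball so as to
# maximize the classification loss. All names and values here are
# placeholders for illustration, not the project's actual setup.
import torch
import torch.nn as nn

def pgd_linf_attack(model, x, y, eps=0.1, alpha=0.02, steps=10):
    """Return an adversarial example x_adv with ||x_adv - x||_inf <= eps."""
    x_adv = x.clone().detach()
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()        # ascend the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)   # project back into the eps-ball
    return x_adv.detach()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = nn.Linear(20, 3)                  # toy classifier
    x = torch.randn(5, 20)                    # toy batch of inputs
    y = torch.randint(0, 3, (5,))             # toy labels
    x_adv = pgd_linf_attack(model, x, y)
    print((x_adv - x).abs().max())            # perturbation stays within eps
```

The Wasserstein attack model replaces the L-infinity ball above with a Wasserstein ball, whose projection step is far more expensive to compute; speeding up that step is the focus of the project.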
Faculty supervisor: Yaoliang Yu
Intern: Kaiwen Wu
Partner organization: Borealis AI
Discipline: Computer science
Sector: Finance, insurance and business
University: University of Waterloo
Program: Accelerate