Novel Corrective and Training Procedures for Neural Network Compliance

In AI safety, compliance means that a model adheres to its operational specification at runtime so as to avoid adverse events for the end user. This proposal investigates model compliance in two ways: (i) applying corrective measures to a non-compliant machine learning (ML) model, and (ii) enforcing compliance throughout the model’s training process. We aim to achieve the first by removing gradient information associated with the features that bias the model.
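
As a minimal sketch of the first corrective measure, assuming the indices of the biasing features are already known (the `biased_idx` argument below is hypothetical), one gradient update could discard the gradient components tied to those features before applying the step:

```python
import numpy as np

def masked_gradient_step(weights, grad, biased_idx, lr=0.1):
    """Apply one SGD step after removing the gradient information for
    features suspected of biasing the model (indices are hypothetical)."""
    grad = grad.copy()
    grad[biased_idx] = 0.0          # discard gradient signal for biased features
    return weights - lr * grad

w = np.ones(4)
g = np.array([0.5, -0.2, 0.3, 0.1])
w_new = masked_gradient_step(w, g, biased_idx=[1, 3])
# weights at the masked indices are left unchanged by the update
```

In a full training loop the same masking would be applied at every step, so the model never accumulates signal from the flagged features.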

Dynamic Deep Generative Graph Models for Financial Forecasting

Borealis AI has access to a large volume of financial data related to the stock market and is interested in leveraging recent developments in machine learning to better understand this data. Some potential questions emerging from this data are: (1) Given a stock’s closing prices over recent months, can we predict its returns within the next month? (2) If a stock crisis occurs, can we predict and control its spread? (3) Given a stock’s history, can we help reduce the risk of investment?
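
For concreteness, question (1) starts from computing returns out of a closing-price series; a minimal sketch of simple one-period returns, assuming nothing about the downstream forecasting model:

```python
import numpy as np

def simple_returns(prices):
    """Per-period simple returns r_t = (p_t - p_{t-1}) / p_{t-1},
    computed from a closing-price series."""
    prices = np.asarray(prices, dtype=float)
    return np.diff(prices) / prices[:-1]

r = simple_returns([100.0, 110.0, 99.0])   # → [0.10, -0.10]
```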

Fast and Accurate Computation of Wasserstein Adversarial Examples

Machine learning (ML) has recently achieved impressive success in many applications. As ML penetrates safety-critical domains, security and robustness concerns about ML systems have received considerable attention. Surprisingly, recent work has shown that current ML models are vulnerable to adversarial attacks: by slightly perturbing the input, ML models can be manipulated into producing completely unexpected outputs. Many attack and defence algorithms have been developed in the field under the convenient but questionable Lp attack model.
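
The kind of small perturbation described above can be illustrated on a linear classifier; the following sketch applies an FGSM-style L∞ step (a generic illustration of the Lp attack model, not the Wasserstein attack this project studies):

```python
import numpy as np

def fgsm_linf(x, w, b, y, eps):
    """One-step L_inf attack on a linear scorer f(x) = w.x + b with label
    y in {-1, +1}: move each coordinate by eps against the gradient sign
    of the margin y*f(x)."""
    grad = y * w                       # d/dx of the margin y*(w.x + b)
    return x - eps * np.sign(grad)     # decreases the margin within an eps L_inf ball

w = np.array([2.0, 1.0])
b = 0.0
x = np.array([1.0, -1.0])
x_adv = fgsm_linf(x, w, b, y=+1, eps=0.5)
score, adv_score = w @ x + b, w @ x_adv + b   # 1.0 before, -0.5 after: label flips
```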

Investigating multi-task learning in semantic parsing

Current research in semantic parsing suffers from a lack of annotated data, which is hard to acquire. In this project, we aim to tackle the problem of converting natural language utterances to SQL (Text-to-SQL) on complex databases in a low-resource environment. More specifically, we focus on how multi-task learning (MTL) can help in this task. We will first identify the related natural language processing (NLP) tasks that can contribute to improving the performance of semantic parsing.

Improving Efficiency and Robustness of Model-based Reinforcement Learning

Model-based reinforcement learning allows AI systems to learn and use predictive models of their environments to plan ahead, achieving tasks more efficiently. The proposed project aims to (i) develop methods for identifying when an uncertain and/or flawed model can be relied on to make plans, and when it cannot, and (ii) implement a method which allows an AI system to explore its environment exactly when exploration will be most useful for improving its model-based predictions and plans.
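
A minimal sketch of how a learned model can be used to plan ahead, using a simple random-shooting planner on a hypothetical 1-D dynamics model (the reward, dynamics, and hyperparameters are illustrative assumptions, not the project's method):

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout_return(model, state, actions):
    """Roll a candidate action sequence through the (learned) model and
    accumulate reward; here reward is negative distance to the origin."""
    total = 0.0
    for a in actions:
        state = model(state, a)
        total -= abs(state)
    return total

def plan(model, state, horizon=5, n_candidates=256):
    """Random-shooting planner: sample action sequences, simulate each
    with the model, and return the first action of the best sequence."""
    best_a0, best_ret = 0.0, -np.inf
    for _ in range(n_candidates):
        actions = rng.uniform(-1.0, 1.0, size=horizon)
        ret = rollout_return(model, state, actions)
        if ret > best_ret:
            best_a0, best_ret = actions[0], ret
    return best_a0

# hypothetical 1-D dynamics the agent has learned: next_state = state + action
model = lambda s, a: s + a
a0 = plan(model, state=2.0)   # the planner steers toward the origin
```

A flawed model would make these simulated rollouts misleading, which is exactly why part (i) of the project asks when such plans can be trusted.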

Wide-baseline Novel Scene Synthesis from a Single Image

Novel view synthesis is the process of generating new images from an unseen perspective, given at least one image of a scene. Existing methods assume that a single probable novel view is associated with each unseen perspective. This simplifying assumption prevents them from being applied to more difficult cases where the set of probable novel views is highly varied. This project proposes to investigate a new approach that generates a wide variety of novel views from a single image and can produce multiple probable outputs.

Non-convex learning with stochastic algorithms

In recent years, deep learning has led to unprecedented advances in a wide range of applications including natural language processing, reinforcement learning, and speech recognition. Despite the abundance of empirical evidence highlighting the success of neural networks, the theoretical properties of deep learning remain poorly understood and have been a subject of active investigation. One foundational aspect of deep learning that has attracted great interest in recent years is the generalization behavior of neural networks, that is, the ability of a neural network to perform well on unseen data.

3D density estimation using normalizing flows and its application to 3D reconstruction in cryo-EM

Generative models enable researchers to address problems ranging from noise removal to generating novel samples with the properties of a domain. Generative models are commonly studied for images; in this project, the idea will be extended to 3D structures, or volumes. Single-particle cryo-electron microscopy (cryo-EM) is a technique for estimating accurate 3D structures of biological molecules, used by practitioners in fields such as precision medicine. It allows them to design drugs that could cure patients with rare diseases while avoiding side effects.
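
A minimal sketch of density estimation with a flow in 1-D, using the change-of-variables formula on the hypothetical flow x = exp(z) with a standard normal base distribution (this recovers the log-normal density, not the project's actual 3D model):

```python
import numpy as np

def flow_density(x):
    """Density of x = exp(z) with z ~ N(0, 1), via the change-of-variables
    formula p_x(x) = p_z(f_inv(x)) * |d f_inv / dx|, where f_inv = log."""
    z = np.log(x)                                   # invert the flow
    log_det = -np.log(x)                            # log |dz/dx| = -log x
    log_pz = -0.5 * z**2 - 0.5 * np.log(2 * np.pi)  # standard normal log-density
    return np.exp(log_pz + log_det)

flow_density(1.0)   # → 1/sqrt(2*pi) ≈ 0.3989
```

Extending this to 3D volumes means learning an invertible map in higher dimensions and tracking the log-determinant of its Jacobian, but the density formula is the same.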

Study of the Latent Space in NLP: Mathematical Foundation and Application to Disentanglement

Recent progress on word and sentence embeddings has enabled efficient representation and learning of complex high-dimensional probability distributions over rich text data. The proposed research aims at addressing some of the fundamental questions in this field: What are the natural mathematical structures on these latent spaces? How can we find a meaningful basis? What is the best method of disentanglement for NLP?

Optimization of group equivariant convolutional networks

The explosion in popularity of deep learning owes much to the success of convolutional neural networks, widely used in diverse fields including computer vision and natural language processing. Recently, the group equivariant convolutional neural network (G-CNN) was introduced, in which equivariance to symmetries inherent in the data is built into the network architecture.
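
The equivariance property can be checked numerically in the simplest case: a 1-D circular convolution commutes with cyclic shifts, a toy stand-in for the richer group symmetries that G-CNNs build in:

```python
import numpy as np

def circular_conv(x, k):
    """1-D circular cross-correlation of signal x with kernel k."""
    n = len(x)
    return np.array([sum(x[(i + j) % n] * k[j] for j in range(len(k)))
                     for i in range(n)])

x = np.array([1.0, 2.0, 3.0, 4.0])
k = np.array([1.0, -1.0])
shift = lambda v: np.roll(v, 1)   # cyclic shift by one position

# equivariance to cyclic shifts: conv(shift(x)) == shift(conv(x))
lhs = circular_conv(shift(x), k)
rhs = shift(circular_conv(x, k))
```

G-CNNs generalize this identity from translations to larger symmetry groups such as rotations and reflections.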