Computers that can understand and communicate in human languages would benefit a wide range of application domains, from finance, e-commerce, and legal services to health care. In recent years, deep learning has dramatically accelerated natural language processing research by allowing models to learn statistical patterns from massive amounts of data. However, current models remain weak in their reasoning and abstraction abilities. This shortcoming limits their robustness to naturally occurring input changes and to adversarial attacks.
Existing data lake systems lack support for storing and discovering features that could be reused across different ML projects. These limitations hamper decision-making: data scientists spend most of their time finding, preparing, and integrating relevant data sets before they can complete analytics tasks. Feature discovery systems are needed to ease the process of building data science pipelines that deliver significant insights efficiently, effectively, and fairly.
In AI safety, compliance ensures that a model adheres to operational specifications at runtime, avoiding adverse events for the end user. This proposal looks at obtaining model compliance in two ways: (i) applying corrective measures to a non-compliant machine learning (ML) model and (ii) ensuring compliance throughout the model’s training process. We aim to achieve the first by removing gradient information related to features that bias the model.
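The corrective measure described above can be illustrated with a minimal sketch. Everything below is a hypothetical toy construction, not the proposal's actual method: a logistic regression trained by gradient descent in which the gradient components for a designated set of biasing feature indices (`biased_idx`) are zeroed before each update, so the model cannot accumulate weight on those features.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_with_gradient_removal(X, y, biased_idx, lr=0.1, epochs=200):
    """Toy logistic regression that removes gradient information for
    features listed in `biased_idx` (illustrative sketch only)."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = sigmoid(X @ w)
        grad = X.T @ (p - y) / len(y)
        grad[biased_idx] = 0.0  # corrective measure: drop biasing-feature gradients
        w -= lr * grad
    return w
```

Because the weights start at zero and their gradients are always zeroed, the biasing features end training with exactly zero weight, while the remaining features are fit normally.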
Borealis AI has access to a huge amount of financial data related to the stock market and is interested in leveraging recent developments in machine learning to better understand this data. Some potential questions emerging from this data are: (1) Given the closing prices of a stock over recent months, can we predict its returns over the next month? (2) If a stock crisis occurs, can we predict and control the spread of the crisis? (3) Given a stock’s price history, can we help reduce the risk of investment?
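For question (1), the target quantity is typically the simple period-over-period return computed from closing prices, i.e. r_t = p_t / p_{t-1} - 1. A minimal sketch (the function name and input are illustrative, not part of the proposal):

```python
import numpy as np

def simple_returns(closing_prices):
    """Simple period-over-period returns from a series of closing prices:
    r_t = p_t / p_{t-1} - 1."""
    p = np.asarray(closing_prices, dtype=float)
    return p[1:] / p[:-1] - 1.0
```

A prediction model would then be trained to forecast future values of this return series from its recent history.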
Machine learning (ML) has recently achieved impressive success in many applications. As ML starts to penetrate safety-critical domains, security and robustness concerns about ML systems have received considerable attention. Surprisingly, recent work has shown that current ML models are vulnerable to adversarial attacks: by slightly perturbing the input, ML models can be manipulated into producing completely unexpected outputs. Many attack and defence algorithms have been developed in the field under the convenient but questionable Lp attack model.
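The Lp attack model mentioned above constrains the perturbation's Lp norm. A standard example under the L-infinity variant is the Fast Gradient Sign Method (FGSM), sketched here as a single generic step (assuming the loss gradient with respect to the input is already available; this is a textbook illustration, not the proposal's method):

```python
import numpy as np

def fgsm_perturb(x, grad, eps):
    """FGSM step under the L-infinity attack model: move every input
    coordinate by eps in the sign direction of the loss gradient,
    so the perturbation's L-infinity norm is exactly eps (where grad != 0)."""
    return x + eps * np.sign(grad)
```

By construction the perturbed input stays within an eps-sized L-infinity ball around the original, which is what makes the change imperceptibly small while still potentially flipping the model's prediction.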
Current research in semantic parsing suffers from a lack of annotated data, which is hard to acquire. In this project, we aim to tackle the problem of converting natural language utterances to SQL (Text-to-SQL) on complex databases in a low-resource environment. More specifically, we focus on how multi-task learning (MTL) can help with this task. We will first identify the related natural language processing (NLP) tasks that can contribute to improving the performance of semantic parsing.
Model-based reinforcement learning allows AI systems to learn and use predictive models of their environments to plan ahead, achieving tasks more efficiently. The proposed project aims to (i) develop methods for identifying when an uncertain and/or flawed model can be relied on to make plans, and when it cannot, and (ii) implement a method which allows an AI system to explore its environment exactly when exploration will be most useful for improving its model-based predictions and plans.
Novel view synthesis is the process of generating new images from an unseen perspective, given at least one image of a scene. Existing methods assume that each unseen perspective has a single probable novel view, when in fact there may be many. This simplifying assumption prevents these methods from being applied to more difficult cases where the set of probable novel views is highly varied. This project proposes to investigate a new approach that generates a wide variety of novel views from a single image and can produce multiple probable outputs.
In recent years, deep learning has led to unprecedented advances in a wide range of applications, including natural language processing, reinforcement learning, and speech recognition. Despite the abundance of empirical evidence highlighting the success of neural networks, the theoretical properties of deep learning remain poorly understood and are a subject of active investigation. One foundational aspect of deep learning that has attracted great interest in recent years is the generalization behavior of neural networks, that is, the ability of a neural network to perform well on unseen data.
Generative models enable researchers to address multiple problems, ranging from noise removal to generating novel samples with the properties of a domain. Generative models are commonly studied for images; in this project, the idea will be extended to 3D structures or volumes. Single-particle cryo-electron microscopy (cryo-EM) is a technique for estimating accurate 3D structures of biological molecules, used by practitioners in fields such as precision medicine. This allows them to design drugs that could treat patients with rare diseases while avoiding side effects.