This fundamental research project investigates semantic visual navigation tasks, such as asking a household robot to “go find my keys”. We seek to enhance the efficacy of repeated search tasks within the same environment by explicitly building, maintaining, and exploiting a map of locations that the robot has previously explored. We also seek to exploit prior location-to-location, object-within-location, and object-to-object relationships from similar environments (e.g. within a common cultural region) to improve semantic visual navigation in unseen environments.
Optimal trade execution is a well-known problem in quantitative finance: it helps financial actors who trade large quantities of a given asset minimize their risk and their adverse price impact. The problem’s complexity multiplies when considering highly fragmented markets, such as those existing today for digital assets. The most recent advances in reinforcement learning and deep learning open the door to a new class of execution algorithms. This data-driven class of algorithms relaxes many of the assumptions required by the classical solutions derived from stochastic optimal control theory.
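As a toy illustration of the objective such execution algorithms optimize, the sketch below computes the implementation shortfall of a child-order schedule under an assumed linear price-impact model. All names, numbers, and the impact model itself are illustrative assumptions, not the project's actual formulation.

```python
def implementation_shortfall(child_orders, mid_prices, impact=0.01):
    """Cost of a schedule relative to filling everything at the arrival price.

    child_orders: shares bought at each step; mid_prices: mid price per step.
    A linear temporary-impact term (impact * size) is assumed on each fill.
    """
    arrival = mid_prices[0]
    cost = 0.0
    for qty, mid in zip(child_orders, mid_prices):
        fill_price = mid + impact * qty  # assumed linear price impact
        cost += qty * (fill_price - arrival)
    return cost

# TWAP baseline: split 1000 shares evenly over 4 steps in a rising market,
# versus a front-loaded schedule that pays heavily for its own impact.
prices = [100.0, 100.1, 100.2, 100.3]
twap = [250, 250, 250, 250]
front_loaded = [700, 100, 100, 100]
```

Under this convex (size-proportional) impact assumption, the even TWAP split incurs a lower shortfall than the front-loaded schedule; a learned policy would trade off exactly this impact cost against price risk.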
The goal of the project is to develop a predictive model that produces a better estimate of relative altitude using only the barometer sensor inside the Notio device. A precise relative-altitude measurement is crucial because it is used to compute the slope, an important metric for the cyclist, and will make the Notio even better. The difficulty of this task comes from the fact that the altitude measured by the sensor tends to drift, so the computed slope drifts as well.
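The drift problem can be illustrated with a minimal sketch: subtracting a slow exponential baseline from the raw altitude series removes the low-frequency drift before the slope is computed. The function names and the smoothing constant are assumptions for illustration, not the Notio's actual processing.

```python
def remove_drift(altitudes, alpha=0.1):
    """High-pass the altitude series by subtracting an exponential baseline.

    alpha controls how quickly the baseline follows the signal; a small alpha
    means only very slow trends (i.e. sensor drift) are tracked and removed.
    """
    baseline = altitudes[0]
    corrected = []
    for a in altitudes:
        baseline = (1 - alpha) * baseline + alpha * a
        corrected.append(a - baseline)
    return corrected

def slope_percent(delta_altitude, delta_distance):
    """Grade in % from altitude gained over the distance travelled."""
    return 100.0 * delta_altitude / delta_distance
```

On a purely drifting (linearly increasing) altitude series, the corrected values stay near zero instead of growing without bound, which keeps the derived slope from drifting as well.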
In this project, we propose a continual learning approach to address the problem of catastrophic forgetting in online image classification. Concretely, we propose a model that learns how to mask a series of general modules in a deep learning architecture, so that generalization emerges through the composition of those modules. This is of vital importance for Element AI: it provides reusable solutions that scale with new data, without learning a new model for every problem, while improving overall performance.
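A toy sketch of the masking idea, with an illustrative architecture (module sizes, names, and the binary-mask scheme are all assumptions): a task-specific mask selects which shared modules participate in the forward pass, so different tasks compose the same modules instead of training a separate network each time.

```python
import numpy as np

rng = np.random.default_rng(0)
# Three shared "general modules", here just fixed random linear maps.
modules = [rng.standard_normal((4, 4)) for _ in range(3)]

def forward(x, mask):
    """Compose only the modules the binary mask switches on."""
    h = x
    for W, m in zip(modules, mask):
        if m:                      # masked-out modules are skipped entirely
            h = np.tanh(W @ h)
    return h

x = rng.standard_normal(4)
y_task_a = forward(x, mask=[1, 0, 1])  # task A reuses modules 0 and 2
y_task_b = forward(x, mask=[0, 1, 1])  # task B reuses modules 1 and 2
```

In the actual approach the mask itself would be learned per task; only the composition mechanism is sketched here.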
The purpose of this project is to give the company access to useful insurance representations that encode the diversity of contexts found in larger markets. This is expected to boost predictive performance on tasks learned in small-data, highly variable target settings.
This project aims to evaluate whether recent deep learning models trained to exploit weak labels can extract meaningful lesion localizations from image-level labels, either from individual scans or from a (longitudinal) sequence thereof. To this end, we will scale up existing models that have been shown to work on 2D images to a 3D context, studying labeling performance as the dataset size grows.
Building energy consumption alone accounts for nearly 40% of global energy consumption and more than 30% of annual greenhouse gas emissions. In that light, optimizing the control of heating, ventilation, and air conditioning (HVAC) is a major challenge for today's energy sector. This project implements an AI-based automated control system, specifically trained by reinforcement on real-time data and on the impact of its decisions.
Creating a non-player character (NPC) to play a game is becoming increasingly important. NPCs can be used in quality assurance to test a game before sending it for certification. Testing a game in a way that mimics a human player would make the tests more accurate and would help discover design and implementation errors, resulting in time and cost savings. Recent research reported in the literature has focused on skill-based games.
The project aims to facilitate the research and development of new drugs by exploring deep learning methods to process molecules and to generate new ones. The methods to be experimented with include few-shot learning, generative adversarial networks, and variational autoencoders. We would like to improve these methods specifically for pharmacological datasets, which are vastly different from the common public datasets used in academic research on the aforementioned models.