Machine Learning for Building Decision Diagrams

Combinatorial optimization plays a prominent role in today's society. Whether in logistics, transportation, or financial management, all of these fields face problems for which the best possible solution is sought. However, a large number of highly complex problems still remain out of reach of current optimization methods. This is […]

Composing without forgetting

In this project, we propose a continual learning approach to address the problem of catastrophic forgetting in online image classification. Concretely, we propose a model that learns how to mask a series of general modules in a deep learning architecture, so that generalization emerges through the composition of those modules. This is of vital […]
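The masking idea can be illustrated with a minimal sketch (the modules, mask values, and shapes below are illustrative placeholders, not the project's actual architecture): a learned per-task mask softly selects how much each general module contributes to the composed output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three general-purpose linear "modules" shared across tasks (illustrative).
modules = [rng.standard_normal((4, 4)) for _ in range(3)]

def compose(x, mask):
    """Combine module outputs weighted by a softmax-normalized mask."""
    weights = np.exp(mask) / np.exp(mask).sum()  # soft mask over modules
    return sum(w * (m @ x) for w, m in zip(weights, modules))

x = rng.standard_normal(4)
task_mask = np.array([2.0, -1.0, 0.5])  # would be learned per task
y = compose(x, task_mask)
```

Because only the small mask is adapted per task while the modules stay shared, learning a new task need not overwrite what earlier tasks stored in the modules.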

Approximate Online Bilevel Optimization for Learning Data Augmentation

In this project, we aim to automatically learn an augmenter network using an approximate online bilevel optimization procedure: the augmenter generates a distribution of transformations that minimizes the loss on a validation set. By unfolding the gradients of the training loss, we will optimize the loss on validation […]
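As a rough illustration of unfolding gradients through the inner training step, here is a toy one-dimensional sketch (all names and values are assumptions, not the project's procedure): one SGD step on the training loss is unrolled, and the validation loss is then differentiated with respect to the augmentation parameter, with central finite differences standing in for backpropagation through the step.

```python
# Toy one-step unrolled bilevel optimization (all values illustrative).
# Inner problem: one SGD step on a training loss that depends on an
# augmentation parameter phi (here, a simple scaling of the training input).
# Outer problem: tune phi to minimize the validation loss after that step.

x_tr, y_tr = 2.0, 1.0      # "training set"
x_val, y_val = 1.5, 1.2    # "validation set"
lr_inner, lr_outer = 0.05, 0.1

def inner_step(w, phi):
    """One SGD step on the training loss (phi * x_tr * w - y_tr)^2."""
    grad_w = 2.0 * (phi * x_tr * w - y_tr) * phi * x_tr
    return w - lr_inner * grad_w

def val_loss_after_step(w, phi):
    """Validation loss evaluated after the unrolled inner step."""
    return (x_val * inner_step(w, phi) - y_val) ** 2

w, phi = 0.0, 1.0
initial_loss = val_loss_after_step(w, phi)
for _ in range(100):
    # Gradient of the unrolled validation loss w.r.t. phi.
    eps = 1e-5
    g = (val_loss_after_step(w, phi + eps)
         - val_loss_after_step(w, phi - eps)) / (2 * eps)
    phi -= lr_outer * g
    w = inner_step(w, phi)
final_loss = (x_val * w - y_val) ** 2
```

The augmenter here is a single scalar; in the project, a network's parameters would play the role of phi and autodiff would replace the finite-difference gradient.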

Link Prediction on Knowledge Graphs with Graph Neural Networks

Knowledge graphs store facts using relations between pairs of entities. In this work, we address the question of link prediction in knowledge graphs. Our general approach broadly follows neighborhood aggregation schemes such as that of Graph Convolutional Networks (GCN), which in turn was motivated by spectral graph convolutions. Our proposed model will aggregate information from […]
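A minimal sketch of such neighborhood aggregation (toy graph and illustrative names; the actual model would also handle relation types) is a single symmetrically normalized GCN layer followed by a dot-product link score:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy undirected graph over 4 entities (relation types omitted for brevity).
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)

def gcn_layer(H, A, W):
    """One aggregation step: H' = relu(D^-1/2 (A + I) D^-1/2 H W)."""
    A_hat = A + np.eye(len(A))            # add self-loops
    d = A_hat.sum(axis=1)                 # degrees
    A_norm = A_hat / np.sqrt(np.outer(d, d))
    return np.maximum(A_norm @ H @ W, 0.0)

H = rng.standard_normal((4, 8))           # initial entity features
W = rng.standard_normal((8, 8))           # learnable weights
Z = gcn_layer(H, A, W)

def link_score(i, j):
    """Score a candidate link by the dot product of aggregated embeddings."""
    return Z[i] @ Z[j]
```

Stacking several such layers lets each entity's embedding aggregate information from progressively larger neighborhoods before links are scored.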

Graph-based learning and inference: models and algorithms

Learning from relational data is crucial for modeling the processes found in many application domains ranging from computational biology to social networks. In this project, we propose to work on developing modeling techniques that combine the advantages of the approaches found in two fields of study: Machine Learning (through graph neural networks) and Statistical Learning […]

Few-shot Generative Adversarial Networks

The most successful computer vision approaches are based on deep learning architectures, which typically require large amounts of labeled data that can be impractical or expensive to acquire. Therefore, few-shot learning techniques have been proposed to learn new concepts from just one or a few annotated examples. However, unsupervised methods such as generative adversarial networks (GANs) […]

Separating Syntax and Semantics for Semantic Parsing

The study of language disorders, theoretical linguistics, and neuroscience suggests that language competence involves two interacting systems, typically dubbed syntax and semantics. However, few state-of-the-art deep-learning approaches to natural language processing explicitly model two different systems of representation. While achieving impressive performance on linguistic tasks, they commonly fail to generalize systematically. For example, unlike humans, learning a […]

Efficient Deep Learning Methods that Only Require Few Labeled Data

In this postdoc, we plan to focus on computer vision tasks where existing deep learning methods require lots of labeled samples to work well. Acquiring labeled samples is time-consuming and often impractical. Thus, we investigate three different classes of methods to alleviate the label scarcity problem: active learning, weakly-supervised learning, and few-shot learning. In active […]

Understanding Empirical Risk Minimization via Information Theory

Deep neural networks (DNNs) are a class of machine learning models inspired by biological neural networks. Learning with deep neural networks has enjoyed huge empirical success in recent years across a wide variety of tasks. Lately, many researchers in the machine learning community have become interested in the generalization mystery: why do overparameterized DNN […]

An information-theoretic framework for understanding generalization in neural networks

Deep neural networks (DNNs) are a class of machine learning models inspired by biological neural networks. DNNs are general function approximators, which is why they can be applied to almost any machine learning problem. Their applications can be found in visual object recognition in computer vision, text translation in unsupervised learning, […]
