Learning abstract causes from text

Consider a question that a policymaker might have: which economic factors have causal effects on the median housing price in a region? Answering this question requires gathering historical observations of house prices and of the economic factors of interest, and performing statistical analysis to assess causal effects. But what if the policymaker does not know exactly which economic factors are relevant? What if they cannot afford to measure some of them?

Learning causal world models at scale

In order to navigate the world, an autonomous agent must build a causal model to understand the effects of its actions. In many tasks (autonomous driving, automated medicine), collecting causal data by performing arbitrary actions for the sake of measuring their effects (interventions) can be impractical, expensive, or even unethical. On the other hand, collecting data by observing human agents (observations) is often much cheaper, but it does not by itself allow measuring causal effects.
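The gap between observations and interventions can be illustrated with a minimal NumPy sketch (the structural model and all numbers are purely illustrative): a hidden confounder U drives both the action X and the outcome Y, so regressing Y on observational X overstates the true effect, while data gathered under do(X) recovers it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical structural causal model with a confounder U:
#   U -> X and U -> Y; the true causal effect of X on Y is 1.0.
u = rng.normal(size=n)
x_obs = u + rng.normal(size=n)                      # observational: X depends on U
y_obs = 1.0 * x_obs + 2.0 * u + rng.normal(size=n)

# Observational estimate: regress Y on X (biased by the confounder U).
beta_obs = np.cov(x_obs, y_obs)[0, 1] / np.var(x_obs)

# Interventional data: set X at random (do(X)), breaking the U -> X link.
x_int = rng.normal(size=n)
y_int = 1.0 * x_int + 2.0 * u + rng.normal(size=n)
beta_int = np.cov(x_int, y_int)[0, 1] / np.var(x_int)

print(round(beta_obs, 1))  # ~2.0: observational correlation overstates the effect
print(round(beta_int, 1))  # ~1.0: intervention recovers the true effect
```

Here the observational slope doubles the true effect because U moves X and Y together; only the interventional data, which severs the U → X arrow, identifies the causal coefficient.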

Large-scale transformers for probabilistic time series forecasting

Making projections about future quantities of interest is a key component of decision-making, with applications in many domains. For instance, in healthcare, one may be interested in monitoring the severity level of a disease given a treatment plan, while carefully accounting for potential sources of uncertainty. Alternatively, one may be interested in predicting the occupancy level of a data center or of a customer support office throughout the week. This project aims to develop methods, based on deep neural networks, to make such predictions from data.
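To make "probabilistic forecasting" concrete, here is a minimal baseline sketch in NumPy (not the deep-network method this project would develop; the occupancy series and horizon are invented): a seasonal-naive point forecast for a weekly series, wrapped with empirical error quantiles to produce a prediction interval that accounts for uncertainty.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical weekly occupancy series: trend + weekly seasonality + noise.
t = np.arange(200)
y = 50 + 0.1 * t + 10 * np.sin(2 * np.pi * t / 7) + rng.normal(0, 3, size=t.size)

train, test = y[:180], y[180:]

# Seasonal-naive point forecast: repeat the value from 7 steps earlier.
point = train[-7:][np.arange(test.size) % 7]

# Probabilistic layer: empirical quantiles of past seasonal-naive errors
# give an approximate 90% prediction interval around the point forecast.
errors = train[7:] - train[:-7]
lo, hi = np.quantile(errors, [0.05, 0.95])
lower, upper = point + lo, point + hi

coverage = np.mean((test >= lower) & (test <= upper))
print(coverage)  # close to 0.9 when the error distribution is stable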

Dialogue Systems that Help Scientists Write the Literature Review of their Research Papers

This project will involve the creation of intelligent tools to help scientists write the literature reviews of their scientific papers. Leveraging the extensive experience of the co-creator of the Toronto Paper Matching System (TPMS) which is widely used by most Machine Learning conferences, we will create intelligent tools to provide suggestions for a scientist to write the literature review of their paper, finding relevant work and placing that relevant work in context with the current paper.

Composing without forgetting

In this project, we propose a modular continual learning approach to face the problem of catastrophic forgettingand transfer in learning from evolving task distributions. Concretely, we propose a model that learns how to selectmost relevant modules based on a local decision rule for a given task to form a deep learning model for solving agiven task. In this framework we generalization to unseen but related tasks emerge through the composition ofthose modules.