Efficient Deep Learning Methods That Require Only a Few Labeled Samples
In this postdoc, we plan to focus on computer vision tasks where existing deep learning methods require large amounts of labeled data to perform well. Acquiring labeled samples is time-consuming and often impractical. We therefore investigate three classes of methods that alleviate label scarcity: active learning, weakly-supervised learning, and few-shot learning.

In active learning, the goal is to select and label the most informative samples, maximizing model performance while minimizing labeling costs. In weakly-supervised learning, the goal is to train models using weak labels, so samples can be annotated with incomplete labels that are cheaper to collect; for instance, instead of annotating full masks for segmentation, a cheaper alternative is to annotate a single point per object. In the few-shot setup, the goal is to build models that can perform a task with only a few labeled samples.

For these problem setups, we plan to propose new methods by building on our previous work and on promising methods from the literature. Deep learning models that learn well from little labeled data can lead to significant contributions to the scientific community and to many real-life applications.
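To make the active learning setup concrete, a common choice of acquisition function is predictive entropy: the model scores the unlabeled pool and the most uncertain samples are sent for labeling. The sketch below is illustrative only, assuming a classifier that outputs softmax probabilities; the function names and toy data are not part of the proposal.

```python
import numpy as np

def entropy_acquisition(probs: np.ndarray) -> np.ndarray:
    """Predictive entropy per sample; higher means more uncertain."""
    eps = 1e-12  # avoid log(0)
    return -(probs * np.log(probs + eps)).sum(axis=1)

def select_for_labeling(probs: np.ndarray, budget: int) -> np.ndarray:
    """Return indices of the `budget` most uncertain unlabeled samples."""
    scores = entropy_acquisition(probs)
    return np.argsort(-scores)[:budget]  # negate for descending order

# Toy example: softmax outputs for 4 unlabeled samples, 3 classes.
probs = np.array([
    [0.98, 0.01, 0.01],  # confident -> low entropy
    [0.34, 0.33, 0.33],  # near-uniform -> high entropy
    [0.70, 0.20, 0.10],
    [0.50, 0.49, 0.01],
])
picked = select_for_labeling(probs, budget=2)
```

Here the near-uniform prediction (index 1) is selected first; the confident one (index 0) is skipped, so the labeling budget goes to samples the model is least sure about.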