One of the fundamental problems in machine learning is training high-quality neural network models from small quantities of data. In many situations, obtaining more data is not possible due to the inherent rarity of samples, their physical unavailability, financial costs, or time constraints. Under these circumstances, standard training approaches lead to significant overfitting, which has prompted the recent emergence of few-shot (or low-shot) learning, a problem setting in which every category in a task has only a few examples available. In the visual domain, initial efforts were directed at image classification; promising results there have since triggered advances in few-shot object detection, image segmentation, and video classification.
Humans are very good at learning new concepts from only a few instances, and in this work we aim to mimic that ability. Few-shot image classification is not a single well-defined task: the performance of state-of-the-art approaches depends strongly on the number of available categories and the number of per-category examples in a task. Moreover, existing few-shot multiclass methods assume that the test set is closed, meaning that only images truly belonging to the categories seen during training can appear at test time. This is a strong restriction that limits the real-world applicability of these methods. To ensure that few-shot methods translate well to real-world applications, we focus on few-shot tasks containing data from both known and unknown categories: one-class and multiclass open-set scenarios. To learn effective models for these scenarios, we concentrate on meta-learning (or learning-to-learn) approaches, which aim to facilitate model training on few-shot tasks.
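To make the open-set setting concrete, the following is a minimal sketch of how a few-shot open-set "episode" could be sampled from a labelled pool. The function name, the N-way/K-shot values, and the use of `None` to mark unknown-category queries are illustrative assumptions, not part of any specific method discussed in this thesis.

```python
import random

def sample_open_set_episode(pool, n_way=5, k_shot=1, n_query=5, n_unknown=5):
    """Sample one few-shot open-set episode.

    pool: dict mapping category name -> list of examples.
    Returns a support set drawn from `n_way` known categories, and a query
    set mixing known-category examples with examples from categories that
    never appear in the support set (the open-set part).
    """
    known = random.sample(sorted(pool), n_way)
    unknown = [c for c in sorted(pool) if c not in known]

    # Support set: k_shot examples per known category.
    support = [(x, c) for c in known
               for x in random.sample(pool[c], k_shot)]
    # Closed-set queries: further examples of the known categories.
    query = [(x, c) for c in known
             for x in random.sample([e for e in pool[c]
                                     if (e, c) not in support], n_query)]
    # Open-set queries: examples from categories absent from the support set;
    # the label None marks them as "unknown" for evaluation.
    for c in random.sample(unknown, min(n_unknown, len(unknown))):
        query.append((random.choice(pool[c]), None))
    return support, query
```

A model trained on such episodes must both classify the known-category queries and reject the unknown-category ones, which is precisely what the closed-set assumption removes.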
In this thesis, we introduce four separate meta-learning methods for the few-shot one-class and few-shot multiclass open-set settings and set new state-of-the-art performance levels on these tasks. First, we introduce the novel task of few-shot one-class image classification, in which the training set consists of only a few examples from a single category, with the goal of distinguishing examples of this category from examples of other, unknown categories. We propose and analyze three novel meta-learning methods for this problem: the first trained in multiple stages to predict the weights of an SVM classifier, the second learning a separate feature space dedicated to one-class classification, and the third trained end-to-end to dynamically generate one-class neural network classifiers. In the second part, we study how the number of per-category examples in the training set affects existing methods on the few-shot multiclass closed-set classification task. We introduce a novel dynamic meta-learning solution that enhances the performance of existing few-shot learning methods while keeping the model fast and lightweight. Finally, we discuss approaches that combine the analyses and methods from the previous parts of the thesis and apply them to the recently introduced task of few-shot multiclass open-set classification, where they significantly outperform the existing state-of-the-art method.
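For intuition on the one-class task itself, a simple baseline (not one of the thesis methods above) scores each query by its distance to the mean of the few support embeddings and thresholds that score; the embedding inputs and the threshold `tau` are assumptions of this sketch.

```python
import numpy as np

def one_class_scores(support_emb, query_emb):
    """Score queries against a few-shot one-class support set.

    support_emb: (k, d) array of embeddings of the single known category.
    query_emb:   (m, d) array of query embeddings.
    Returns negated distances to the support prototype, so a higher
    score means "more likely to belong to the known category".
    """
    prototype = support_emb.mean(axis=0)
    return -np.linalg.norm(query_emb - prototype, axis=1)

def one_class_predict(support_emb, query_emb, tau):
    # Accept a query as in-class when its score clears the threshold tau.
    return one_class_scores(support_emb, query_emb) >= tau
```

The meta-learning methods in this thesis replace such fixed, hand-chosen scoring rules with classifiers whose parameters are learned or generated from the few support examples.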