Currently, the most effective deep learning methods rely heavily on the availability of a large corpus of annotated data. However, such resources are not always accessible. In this seminar, I will discuss alternative paradigms that aim to make better use of both labelled and unlabelled data, drawing inspiration from certain properties of human learning. I will begin by describing our recent work on active learning, continual learning, and learning with label noise. If time permits, I will also discuss some new insights into local overfitting, which can occur even when overfitting (as traditionally defined) is not observed.