Today's computer vision methods attempt to solve problems ranging from image classification to semantic segmentation. While some of these models are quite effective at their tasks, the most effective ones require large, heavily annotated training sets, and these datasets and their annotations do not come without a cost. Dataset curators spend countless hours collecting data and even more time annotating it with semantic labels suitable for training today's methods. The expenses associated with data collection and annotation grow rapidly as the data becomes more specialized and more difficult to annotate. While any layperson can take a photo of a dog and label it, collecting videos of the ocean floor and labelling the species in those videos can only be done with a budget sufficient to compensate a team of expert marine scientists. These costs motivate computer vision methods that can learn from less data, cheaper annotations, and less supervision. This thesis aims to provide some of these methods. We first introduce Point-supervised Class Activation Maps (PCAMs) to aid in semantic segmentation of images given only point-level labels. Then, we introduce the Dataset for Underwater Invertebrate and Substrate Analysis (DUSIA), which comes with a limited set of partial labels. To address the challenges of learning from those labels, we train the Context Driven Detector with a Negative Region Dropping method, which improves performance given partial labels. Finally, we introduce Context-Matched Collages as a means of generating additional training samples at relatively low cost, leading to state-of-the-art object detection performance on DUSIA.