Machine learning systems are an ever-larger part of human lives, and so it is increasingly important to understand the similarities and differences between human and machine intelligence. However, as machine learning systems are applied to more complex problem settings, understanding them becomes more challenging, and their performance, correctness, and reliability become harder to guarantee. Moreover, "human-level performance" in such settings is often itself not well-defined, as many of the cognitive mechanisms underlying human behavior remain opaque. This dissertation bridges gaps in our understanding of human and machine intelligence using cross-disciplinary insights from cognitive science and machine learning.
First, I develop two frameworks that borrow methodologically from cognitive science to identify deviations from the expected behavior of machine learning systems. Second, I forge a connection between hierarchical modeling, a classical approach to building computational models of human cognition, and meta-learning, a recent technique for small-sample learning in machine learning. I use this connection to develop algorithmic improvements to machine learning systems, both on established benchmarks and in new settings that highlight how far these systems fall short of human standards. Finally, I argue that machine learning should borrow methodologically from cognitive science, as both fields are now tasked with studying opaque learning and decision-making systems. I use this perspective to construct a computational model of machine learning systems that allows us to formalize and test hypotheses about how these systems operate.