Neuroscience and machine learning often operate at opposite ends of a spectrum: the former sometimes finds itself entrenched in the details of experimentation, while the latter sometimes drifts into the expanse of theory. The two fields can nevertheless complement one another, and when they do, the collaboration has produced invaluable results in computational neuroscience, yielding more plausible models of biological solutions. This dissertation presents two detailed investigations into the benefits of this interdisciplinary approach: a model for cognition and a model for vision. Experiments during these investigations led us to a third result: a new learning approach called neural network tomography.

We introduce our universal theory of cognition, Confabulation Theory, and discuss its biological plausibility. Confabulation Theory posits that the cerebral cortex, in conjunction with the thalamus, implements a repeated functional architecture of thalamocortical modules, each encoding one attribute that an object in the individual's mental universe may possess. These modules are interconnected by concurrence statistics called knowledge links, are capable of confabulating a state, and are carefully controlled with action commands. We use Confabulation Theory to build a model for natural language processing and present striking results in sentence generation with context.

Subsequently, we focus on the task of texture classification, which we argue is a more primitive operation than object recognition and is therefore an appropriate starting point for elucidating biology's solution for processing visual stimuli. We develop a hierarchical model for texture classification, carefully informed by results from neuroscience, and demonstrate state-of-the-art performance on a challenging texture classification dataset, evaluated in the context of our human psychophysical experiment.

Finally, we survey existing methods in neural network learning and propose a new approach with several valuable theoretical advantages. By rephrasing the task of function approximation as replicating the topology and weights of an existing universal-approximator network, we show that several of the drawbacks of classical backpropagation learning can be avoided. We define a new objective function, mean squared curvature (MSC), and demonstrate that minimizing the MSC of the difference between the networks during the replication process produces favorable results and allows networks to be reverse-engineered iteratively.
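
To make the confabulation operation above concrete, the following is a minimal sketch of one plausible reading of it: each module tallies the knowledge-link input arriving at each of its symbols from the active symbols of other modules and settles on the best-supported symbol. The module contents, link weights, and winner-take-all rule below are illustrative assumptions, not the dissertation's exact formulation.

import numpy as np

# Hypothetical setup: two source modules, each with one active symbol,
# feed a target module that must confabulate (settle on) one of its
# four symbols. knowledge_links[s, t] holds the concurrence-derived
# weight from active source symbol s to target symbol t; all numbers
# are made up for illustration.
knowledge_links = np.array([
    [0.1, 0.7, 0.2, 0.0],   # links from the active symbol in module A
    [0.3, 0.6, 0.0, 0.1],   # links from the active symbol in module B
])

def confabulate(link_weights):
    """Winner-take-all reading of confabulation: sum the knowledge-link
    input each target symbol receives and pick the most supported one."""
    support = link_weights.sum(axis=0)   # total input per target symbol
    return int(np.argmax(support)), support

winner, support = confabulate(knowledge_links)
print(f"support per symbol: {support}, confabulated symbol: {winner}")
# -> support per symbol: [0.4 1.3 0.2 0.1], confabulated symbol: 1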
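
The MSC objective can likewise be illustrated numerically. Assuming MSC is taken as the mean squared second difference (a discrete curvature estimate) of the residual between the target network and its replica over a sampling grid, a sketch of the quantity being minimized looks like the following; the stand-in functions and the one-dimensional grid are assumptions made purely for illustration.

import numpy as np

# Treat the target network f and the replica g as black-box functions,
# form the residual d(x) = f(x) - g(x), and score it by the mean
# squared second difference, a discrete estimate of curvature.
def mean_squared_curvature(f, g, xs):
    d = f(xs) - g(xs)                                  # residual between networks
    h = xs[1] - xs[0]                                  # uniform grid spacing
    curv = (d[:-2] - 2.0 * d[1:-1] + d[2:]) / h**2     # second difference
    return np.mean(curv ** 2)

xs = np.linspace(-1.0, 1.0, 201)
target  = lambda x: np.tanh(3.0 * x)   # stand-in for the target network
replica = lambda x: np.tanh(2.0 * x)   # stand-in for the replica network
print(mean_squared_curvature(target, replica, xs))
# A perfect replica yields MSC = 0; the replication process would
# iteratively adjust the replica's weights to drive this quantity down.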