Hopfield nets are among the most commonly used models in machine learning and neuroscience today. In this thesis we explore two novel applications of this model in two different areas: cognitive science and machine learning. In the first part of the thesis we motivate, derive, and specify a new model of the control processes that reconfigure mental resources for a change of task. We apply this model to the hundred-year-old Stroop paradigm and find that it uses fewer free parameters than other models that account for a similarly broad set of experimental results. In the second part we introduce a novel model that addresses a shortcoming of the Hopfield net: its failure to recognize memories that have been translated, rotated, or otherwise transformed. Taking inspiration from Arathorn's map-seeking circuit approach, we develop a model that partially overcomes this hurdle and outperforms currently popular models in certain high-noise regimes.
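To make that shortcoming concrete, the following is a minimal sketch of a standard binary Hopfield net (Hebbian outer-product storage, sign-function updates); the function names and parameters are illustrative, and this is not the model developed in this thesis. Recall from a noisy copy of a stored pattern typically succeeds, while recall from a translated (circularly shifted) copy of the same pattern typically falls into an unrelated attractor, since the net has no built-in transformation invariance.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian outer-product rule; states are +/-1, zero self-connections."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, steps=10):
    """Synchronous sign updates until a fixed point (or step limit)."""
    s = state.copy()
    for _ in range(steps):
        s_new = np.sign(W @ s)
        s_new[s_new == 0] = 1.0  # break ties toward +1
        if np.array_equal(s_new, s):
            break
        s = s_new
    return s

rng = np.random.default_rng(0)
patterns = rng.choice([-1.0, 1.0], size=(3, 100))  # three random memories
W = train_hopfield(patterns)
p = patterns[0]

# Noisy probe: flip 10 of 100 bits -- recall usually recovers the memory.
noisy = p.copy()
flip = rng.choice(100, size=10, replace=False)
noisy[flip] *= -1
print("noisy overlap:", recall(W, noisy) @ p / 100)      # typically ~1.0

# Translated probe: circularly shift the same memory by 5 units.
# With no invariance mechanism, the overlap typically stays low.
shifted = np.roll(p, 5)
print("shifted overlap:", recall(W, shifted) @ p / 100)  # typically << 1.0
```

The overlap (normalized dot product with the stored pattern) is a standard way to measure recall quality here: values near 1.0 indicate successful retrieval, while values near 0 indicate convergence to an unrelated state.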