Dynamic models, such as agent-based models (ABMs), are becoming an increasingly common modelling tool in the cognitive sciences. They enable cognitive scientists to explore how computational, analytic models scale up when placed in complex, interactive, and dynamic environments where agents can sequentially interact over time and in space. Frequently, ABMs are built to yield a particular behaviour (riots, echo chamber emergence, etc.). As such, some models may bake in the desired behaviour. However, many models may yield the same behaviour, making it difficult to discriminate between competing computational models. This paper directly addresses this methodological challenge. We explore a case study (fisheries) in which agents make decisions in a dynamic and complex environment. Given a rich data set against which to calibrate and validate model predictions, we compare and contrast statistical, adaptive, and perfect agents. We show that adaptive computational agents match statistical agents in calibration and outperform them in validation. In addition, we show that perfect and random agents fare poorly. This provides a method for using dynamic, agent-based models to choose between computational models.