Automating validation of learning and decision making models using the CogniBench framework
Abstract
Much of cognitive science is based on constructing, validating, and comparing formal models of the mind. Whereas coming up with new and useful models requires expertise and creativity, validating the proposed models and comparing them against the state-of-the-art mainly requires a systematic, rigorous approach. The task of model validation is therefore particularly well-suited for the types of automation that have propelled other research fields (cf. impact of bioinformatics on biology). Here we propose a model benchmarking framework implemented as an open-source Python package named CogniBench. Given a set of candidate models (which can be implemented in various languages), experimental observations, and scoring criteria, CogniBench automatically performs model benchmarks and reports the resulting matrix of scores. We demonstrate the potential of the proposed framework by applying it in the domain of learning and decision making, which poses unique requirements for model validation.
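The abstract describes the framework's core workflow: candidate models, experimental observations, and scoring criteria go in; a matrix of scores comes out. A minimal sketch of that idea in plain Python is given below. All names (`benchmark`, `mse`, `mae`, the model callables) are illustrative assumptions, not the actual CogniBench API.

```python
# Hedged sketch of the benchmarking workflow described in the abstract:
# score each candidate model against the observations under each criterion,
# collecting the results into a {model: {criterion: score}} matrix.
# Function and variable names are hypothetical, not CogniBench's real API.

def mse(predictions, observations):
    """Mean squared error criterion (lower is better)."""
    n = len(observations)
    return sum((p - o) ** 2 for p, o in zip(predictions, observations)) / n

def mae(predictions, observations):
    """Mean absolute error criterion (lower is better)."""
    n = len(observations)
    return sum(abs(p - o) for p, o in zip(predictions, observations)) / n

def benchmark(models, observations, criteria):
    """Return the score matrix: one row per model, one column per criterion."""
    matrix = {}
    for model_name, model in models.items():
        # Each model maps a trial index to a predicted response.
        predictions = [model(t) for t in range(len(observations))]
        matrix[model_name] = {
            crit_name: criterion(predictions, observations)
            for crit_name, criterion in criteria.items()
        }
    return matrix

# Toy example: two candidate "models" fit to four observed values.
observations = [0.0, 1.0, 2.0, 3.0]
models = {
    "identity": lambda t: float(t),   # predicts the trial index exactly
    "constant": lambda t: 1.5,        # always predicts the mean response
}
criteria = {"MSE": mse, "MAE": mae}

scores = benchmark(models, observations, criteria)
```

The dictionary-of-dictionaries return value mirrors the "matrix of scores" the abstract mentions, with models as rows and criteria as columns; a real framework would additionally handle models implemented in other languages and report the matrix in a structured format.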
Main Content