
Automating validation of learning and decision making models using the CogniBench framework

Creative Commons 'BY' version 4.0 license
Abstract

Much of cognitive science is based on constructing, validating, and comparing formal models of the mind. Whereas coming up with new and useful models requires expertise and creativity, validating the proposed models and comparing them against the state-of-the-art mainly requires a systematic, rigorous approach. The task of model validation is therefore particularly well-suited for the types of automation that have propelled other research fields (cf. impact of bioinformatics on biology). Here we propose a model benchmarking framework implemented as an open-source Python package named CogniBench. Given a set of candidate models (which can be implemented in various languages), experimental observations, and scoring criteria, CogniBench automatically performs model benchmarks and reports the resulting matrix of scores. We demonstrate the potential of the proposed framework by applying it in the domain of learning and decision making, which poses unique requirements for model validation.
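The abstract describes a simple contract: candidate models, experimental observations, and scoring criteria go in; a models-by-criteria matrix of scores comes out. The sketch below is a minimal, self-contained illustration of that workflow only; all names in it (`benchmark`, `Model`, `Criterion`, the toy models and criterion) are illustrative assumptions, not CogniBench's documented interface.

```python
# Minimal sketch of the benchmarking pattern the abstract describes:
# score every candidate model against every criterion on the same
# observations, producing a matrix of scores. Names are illustrative;
# this is NOT CogniBench's actual API.
from typing import Callable, Dict, List, Sequence

Model = Callable[[Sequence[float]], List[float]]         # stimuli -> predictions
Criterion = Callable[[List[float], List[float]], float]  # (predictions, responses) -> score


def benchmark(models: Dict[str, Model],
              criteria: Dict[str, Criterion],
              stimuli: Sequence[float],
              responses: List[float]) -> Dict[str, Dict[str, float]]:
    """Return a nested dict: model name -> criterion name -> score."""
    return {
        model_name: {
            crit_name: criterion(model(stimuli), responses)
            for crit_name, criterion in criteria.items()
        }
        for model_name, model in models.items()
    }


if __name__ == "__main__":
    # Two toy "models" of a simple choice task and one scoring criterion (MSE).
    models: Dict[str, Model] = {
        "always_half": lambda s: [0.5 for _ in s],
        "identity": lambda s: list(s),
    }
    criteria: Dict[str, Criterion] = {
        "mse": lambda pred, obs: sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs),
    }
    stimuli = [0.1, 0.4, 0.9]
    responses = [0.2, 0.5, 0.8]
    for model_name, scores in benchmark(models, criteria, stimuli, responses).items():
        print(model_name, scores)
```

In an actual framework, each cell of this matrix would be produced by fitting or simulating a model (possibly implemented in another language) rather than by a pure Python callable, but the input-output structure is the same.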
