eScholarship
Open Access Publications from the University of California

Encoder-Decoder Neural Architectures for Fast Amortized Inference of Cognitive Process Models

Abstract

Computational cognitive modeling offers a principled interpretation of the functional demands of cognitive systems and affords quantitative fits to behavioral/brain data. Typically, cognitive modelers are interested in the fit of a model with parameters estimated using maximum likelihood or Bayesian methods. However, the set of models with known likelihoods is dramatically smaller than the set of plausible generative models. For all but some standard models (e.g., the drift-diffusion model), the lack of closed-form likelihoods typically prevents the use of traditional Bayesian inference methods. Employing likelihood-free methods is a workaround in practice. However, the computational complexity of these methods is a bottleneck, since they require many simulations for each proposed parameter set in posterior sampling schemes. Here, we propose a method that learns an approximate likelihood over the parameter space of interest by encapsulating it in a convolutional neural network, affording fast parallel posterior sampling downstream after a one-off simulation cost is incurred for training.
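The core idea in the abstract — pay a one-off simulation cost to train a network that stands in for the intractable likelihood, then evaluate that surrogate in parallel for posterior inference — can be illustrated with a deliberately minimal sketch. Everything below is a hypothetical toy, not the paper's method: the simulator is a trivial Gaussian model standing in for a cognitive process model, and a tiny one-hidden-layer MLP (with manual gradients) stands in for the paper's convolutional encoder-decoder.

```python
# Toy sketch of amortized likelihood approximation (illustrative only;
# the paper uses a convolutional encoder-decoder, not this MLP).
import numpy as np

rng = np.random.default_rng(0)

def simulator(theta):
    # Stand-in generative model: x ~ Normal(theta, 1). In practice this
    # would be a simulator with no closed-form likelihood.
    return rng.normal(theta, 1.0)

# One-off simulation budget: draw (theta, x) training pairs.
thetas = rng.uniform(-3, 3, size=5000)
xs = simulator(thetas)

# MLP mapping theta -> (mu, log_sigma): a Gaussian surrogate likelihood.
H = 16
W1 = rng.normal(0, 0.5, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.1, (H, 2)); b2 = np.zeros(2)
lr = 1e-2

for step in range(4000):
    idx = rng.integers(0, len(thetas), 128)
    t = thetas[idx][:, None]           # (B, 1)
    x = xs[idx]                        # (B,)
    h = np.tanh(t @ W1 + b1)           # (B, H)
    out = h @ W2 + b2                  # (B, 2)
    mu, log_sig = out[:, 0], out[:, 1]
    sig = np.exp(log_sig)
    # Gradients of the Gaussian negative log-likelihood
    # 0.5*z^2 + log_sigma, with z = (x - mu)/sigma.
    z = (x - mu) / sig
    dout = np.stack([-z / sig, 1.0 - z**2], axis=1) / len(idx)
    dW2 = h.T @ dout; db2 = dout.sum(0)
    dh = (dout @ W2.T) * (1 - h**2)
    dW1 = t.T @ dh; db1 = dh.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

def surrogate_loglik(theta_grid, x_obs):
    # Amortized payoff: one batched forward pass evaluates the approximate
    # log-likelihood at every theta in parallel -- no new simulations.
    h = np.tanh(theta_grid[:, None] @ W1 + b1)
    out = h @ W2 + b2
    mu, sig = out[:, 0], np.exp(out[:, 1])
    return -0.5 * ((x_obs - mu) / sig) ** 2 - np.log(sig)

# Grid posterior under a flat prior for one observed data point.
grid = np.linspace(-3, 3, 601)
x_obs = 1.2
logpost = surrogate_loglik(grid, x_obs)
post = np.exp(logpost - logpost.max())
post /= post.sum()
post_mean = (grid * post).sum()
```

Since the true posterior under this toy model is centered on `x_obs`, the surrogate-based posterior mean should land near 1.2; the grid evaluation replaces the per-proposal simulation loop that makes classic likelihood-free samplers expensive.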
