Predictive processing, or predictive coding, is transforming
our knowledge of perception (Knill & Richards, 1996; Rao
& Ballard, 1999), the brain (Friston, 2018; Hohwy, 2013;
Knill & Pouget, 2004), and embodied cognition (Allen &
Friston, 2018; Clark, 2016; Gallagher & Allen, 2018; Seth,
2015). Predictive processing is a hierarchical
implementation of empirical Bayes, wherein the cognitive
system creates generative models of the world and tests its
hypotheses against incoming data. It is hierarchical insofar
as the predictions at one level are tested against incoming
signals from the lower level. The resulting prediction error,
the difference between the expectation and the incoming
data, is used to recalibrate the model in a process of
prediction error minimization. Predictions may be mediated
by pyramidal cells across the neocortex (Bastos et al., 2012;
Hawkins & Ahmad, 2016; Shipp et al., 2013). Andy Clark
has characterized predictive processing as creating a
“bootstrap heaven” (2016, p. 19), enabling the brain to
develop complex models of the world from limited data. This allows us to extract patterns from ambiguous signals
and establish hypotheses about how the world works.
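The core update can be sketched in a few lines. What follows is only an illustration, not a model drawn from the predictive processing literature: a single "level" holds a prediction and revises it by a fraction of the prediction error, with a fixed learning rate standing in for precision weighting.

```python
# A minimal sketch of prediction error minimization (illustrative only).
# One "level" holds a prediction and nudges it toward the incoming signal
# in proportion to the prediction error; the fixed learning rate stands in
# for the precision weighting posited in the predictive processing literature.

def update_prediction(prediction, incoming_signal, learning_rate=0.1):
    """Return a revised prediction after one round of error minimization."""
    prediction_error = incoming_signal - prediction  # expectation vs. data
    return prediction + learning_rate * prediction_error

prediction = 0.0
for signal in [1.0, 1.2, 0.9, 1.1, 1.0]:  # hypothetical sensory samples
    prediction = update_prediction(prediction, signal)
# The prediction drifts toward the statistics of whatever signal trains it.
```

The crucial point for what follows is in the last comment: the system's expectations are only as good as the signals it is trained on.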
The training signals that we get from the world are,
however, biased in all the same unsightly ways that our
societies are biased: by race, gender, socioeconomic status,
nationality, and sexual orientation. The problem is more
than a mere sampling bias. Our societies are replete with
prejudicial biases that shape the ways we think, act, and
perceive. Indeed, a similar problem arises in machine
learning applications when they are inadvertently trained on
socially biased data (Avery, 2019; N. T. Lee, 2018). The
basic principle in operation here is “garbage in, garbage
out”: a predictive system that is trained on socially biased
data will be systematically biased in those same ways.
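To make the "garbage in, garbage out" point concrete, here is a toy sketch using hypothetical, deliberately skewed labels (the groups, numbers, and "hostile" tags are invented for illustration, not taken from any cited study). The same kind of error-minimizing update as above, trained on biased labels, simply reproduces the bias.

```python
# A toy illustration of "garbage in, garbage out" with invented data:
# an error-minimizing learner trained on socially biased labels ends up
# with socially biased expectations.

from collections import defaultdict

def train_expectations(labelled_data):
    """Learn one expectation per group by minimizing prediction error.

    The update is a running-mean delta rule: each observation nudges the
    group's expectation toward its label by 1/n of the prediction error.
    """
    expectations = defaultdict(float)
    counts = defaultdict(int)
    for group, label in labelled_data:
        counts[group] += 1
        error = label - expectations[group]           # prediction error
        expectations[group] += error / counts[group]  # minimize it
    return dict(expectations)

# Hypothetical, deliberately skewed labels: the outgroup is tagged
# "hostile" (1) four times as often, even though behaviour is identical.
biased_data = ([("ingroup", 0)] * 90 + [("ingroup", 1)] * 10
               + [("outgroup", 0)] * 60 + [("outgroup", 1)] * 40)

print(train_expectations(biased_data))
# Roughly {'ingroup': 0.1, 'outgroup': 0.4}: the learned expectations
# simply reproduce the bias built into the training signal.
```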
Unfortunately, we are unwittingly trained on this
prejudiced data from our earliest years. As predictive
systems, we bootstrap upwards into more complex cognitive
processes while being fed prejudiced data, spiraling into
a “bootstrap hell.” This has repercussions for everything
from higher-order cognitive processes down to basic
perceptual processes. Perceptual racial biases include
perceiving greater diversity and nuance in racial ingroup
faces than in outgroup faces (the cross-race effect; Malpass & Kravitz,
1969), misperceiving actions of racial outgroup members as
hostile (Pietraszewski et al., 2014), and empathetically
perceiving emotions in racial ingroup (but not outgroup)
faces (Xu et al., 2009), among other phenomena. These
biases are particularly worrying because they are recalcitrant to conscious
control or implicit bias training. We may be able to veto a
prejudiced thought (but see Kelly & Roedder, 2008), but we
cannot simply modify our perceptual experience at will.
Recalcitrant predictions such as these are “hyperpriors” and
are not amenable to rapid, conscious adjustment.
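The resistance of an entrenched prior to a handful of contrary observations can be illustrated with a simple precision-weighted Bayesian update. This is only a sketch: it flattens the hierarchy and uses high prior precision as a stand-in for the entrenchment of a hyperprior.

```python
# Why an entrenched prior resists rapid revision (illustrative numbers only).
# In a Gaussian conjugate update, the posterior mean is a precision-weighted
# average of the prior and the data; when the prior's precision dwarfs the
# evidence, a few contrary observations barely move the estimate.

def posterior_mean(prior_mean, prior_precision, observations, obs_precision=1.0):
    """Precision-weighted average of prior mean and sample mean."""
    n = len(observations)
    data_mean = sum(observations) / n
    total_precision = prior_precision + n * obs_precision
    return (prior_precision * prior_mean
            + n * obs_precision * data_mean) / total_precision

# The same five contrary observations shift a weakly held belief a lot,
# but barely shift a deeply entrenched, high-precision one.
contrary_evidence = [1.0] * 5
print(posterior_mean(0.0, prior_precision=1.0, observations=contrary_evidence))    # ~0.83
print(posterior_mean(0.0, prior_precision=500.0, observations=contrary_evidence))  # ~0.01
```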
I begin with an overview of predictive processing. I
explain that the same principles that allow us to bootstrap
our way into full cognition also allow for biases to develop.
These biases include perceptual racial biases, which are
visual and affective rather than cognitive. I explain how
sampling biases in infancy and in emotion perception
contribute to perceptual racial biases (although many other
factors certainly play a role). Finally, I hypothesize that
traditional implicit bias training may not be enough to
disentangle the web of hypotheses that contribute to
perceptual racial bias.