Models of spatial reasoning often assume distinct visual and
spatial representations. In particular, the visual impedance effect
– slower response times when more visual detail is represented
in three-term series spatial reasoning tasks – has been
taken as evidence for the distinct roles of visual and spatial
representations. In this paper, we show that a memory
model of spreading activation based on the ACT-R architecture
can explain the visual impedance effect without the assumption
of distinct visual and spatial representations. Within the same
memory representation, the model encodes varying numbers of visual
features associated with an object.
The visual impedance effect is explained by the spreading activation
mechanism of ACT-R. The model not only provides
a more parsimonious explanation of the visual impedance effect,
but also yields testable predictions about a wide range of
memory effects in spatial reasoning.
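For readers unfamiliar with ACT-R, the following is a minimal sketch of the standard activation, spreading-activation, and retrieval-latency equations that such an account relies on; the exact form and parameter values used in the paper's model are not given here and should be taken as assumptions:
\[
A_i = B_i + \sum_{j} W_j S_{ji}, \qquad S_{ji} = S - \ln(\mathrm{fan}_j), \qquad T_i = F e^{-A_i},
\]
where \(A_i\) is the activation of chunk \(i\), \(B_i\) its base-level activation, \(W_j\) the attentional weight of context element \(j\), \(S_{ji}\) the associative strength from \(j\) to \(i\), and \(T_i\) the retrieval latency. Because \(S_{ji}\) decreases with the number of chunks associated with \(j\) (its fan), an object encoded with more visual features spreads less activation to the relevant spatial facts, producing slower retrieval and hence a visual impedance effect.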