A central component of the predictive coding theoretical framework is the comparison between predictions and sensory decoding. In the probabilistic setting, this takes the form of assessing the similarity or distance between probability distributions. However, such similarity or distance measures are not associated with explicit probabilistic models, leaving their assumptions implicit. In this paper, we explore an original variation on probabilistic coherence variables: we define a probabilistic component, which we call a "Bayesian comparator", that mathematically yields a particular similarity measure. A geometrical analogy suggests two variants of this measure. We apply these similarity measures to simulate the comparison of known, predicted patterns with patterns obtained from sensory decoding, first in a simple, illustrative model, and second in a previous model of visual word recognition. Experimental results suggest that the variant scaled by the norms of both the predicted and perceived probability distributions is more robust and exhibits more desirable dynamics.
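The abstract does not give the measure's formula. As a minimal illustrative sketch only, assuming the norm-scaled variant amounts to a dot product divided by the norms of both probability vectors (i.e., a cosine similarity between the predicted and decoded distributions, in line with the geometrical analogy), comparing a prediction to two hypothetical decoded distributions could look like:

```python
import math

def dot(p, q):
    """Inner product of two discrete probability distributions."""
    return sum(pi * qi for pi, qi in zip(p, q))

def norm_scaled_similarity(p, q):
    """Dot product scaled by the norms of both distributions
    (assumed form of the norm-scaled variant; in [0, 1] for
    non-negative vectors, equal to 1 when p and q are proportional)."""
    return dot(p, q) / (math.sqrt(dot(p, p)) * math.sqrt(dot(q, q)))

# Hypothetical predicted and decoded distributions over 4 categories.
predicted = [0.7, 0.1, 0.1, 0.1]
decoded_close = [0.6, 0.2, 0.1, 0.1]   # close to the prediction
decoded_flat = [0.25, 0.25, 0.25, 0.25]  # uninformative decoding

print(round(norm_scaled_similarity(predicted, decoded_close), 3))  # higher
print(round(norm_scaled_similarity(predicted, decoded_flat), 3))   # lower
```

Scaling by both norms makes the measure insensitive to how peaked each distribution is on its own, which is one plausible reason for the robustness reported in the abstract; the exact definitions are given in the body of the paper.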