The accuracy of current discriminant architectures for visual recognition is hampered by their dependence on holistic image representations, where images are represented as vectors in a high-dimensional space. Such representations lead to complex classification problems because of the need to 1) restrict image resolution and 2) model the complex manifolds induced by variations in pose, lighting, and other imaging variables. Localized representations, where images are represented as bags of low-dimensional vectors, are significantly less affected by these problems but have traditionally been difficult to combine with discriminant classifiers such as the support vector machine (SVM). This limitation has recently been lifted by the introduction of probabilistic SVM kernels, such as the Kullback-Leibler (KL) kernel. In this work we investigate the advantages of using this kernel as a means of combining discriminant recognition with localized representations. We derive a taxonomy of kernels based on the combination of the KL-kernel with various probabilistic representations previously proposed in the recognition literature. Experimental evaluation shows that these kernels can significantly outperform traditional SVM solutions for recognition.
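For orientation, a commonly used form of a KL-based kernel (a sketch for the reader; the constants $a$ and $b$ below are illustrative hyperparameters, not values taken from this abstract) symmetrizes the KL divergence between the probabilistic representations $p$ and $q$ of two images and exponentiates it:

\[
K(p, q) \;=\; \exp\!\Bigl(-a\,\bigl[\mathrm{KL}(p\,\|\,q) + \mathrm{KL}(q\,\|\,p)\bigr] + b\Bigr), \qquad a > 0 .
\]

Under this construction, the kernel value decreases as the two image densities diverge, allowing standard SVM training to operate directly on localized (bag-of-vectors) representations.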