People's mental representations of complex stimuli, such as images of facial affect, are difficult to elicit. To address this challenge, methods such as Markov Chain Monte Carlo with People (MCMCP) integrate human agents into computer-based sampling algorithms. However, such methods suffer from slow convergence, making them impractical for recovering the representations of individuals. Here, we extended MCMCP by introducing an adapted Variational Auto-Encoder (VAE) equipped with domain knowledge as an auxiliary agent that guides the sampling process away from less useful experimental trials. To test this approach, we ran a new experiment comparing this VAE-guided MCMCP against baseline MCMCP in terms of convergence speed and the quality of the recovered human representations of facial affect. Preliminary results demonstrated that most guided chains converged on an individual's facial affect representation within a single experimental session, faster than the baseline method, and revealed the extent of individual differences in facial affect representations. Thus, VAE-guided MCMCP provides a promising framework for interfacing machine intelligence with psychological experiments to enhance our understanding of human cognition.
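
To make the core idea concrete, the following is a minimal sketch of one MCMCP chain in which the participant's choice serves as the acceptance rule and proposals are drawn in the latent space of a pre-trained VAE, one plausible way a VAE can keep trials on the manifold of realistic stimuli. It is illustrative only, not the paper's exact procedure; the names `vae.decode`, `human_prefers`, and the random-walk proposal are assumptions.

```python
import numpy as np

def vae_guided_mcmcp(vae, human_prefers, z_init, n_trials=100, step_size=0.5):
    """Sketch of a single VAE-guided MCMCP chain.

    vae           -- object with a .decode(z) method mapping latent vectors to
                     stimuli (e.g., face images); assumed pre-trained on the domain.
    human_prefers -- callable(stimulus_a, stimulus_b) -> True if the participant
                     chooses stimulus_a over stimulus_b on a two-alternative trial.
    z_init        -- starting point in the VAE latent space.
    """
    z_current = np.asarray(z_init, dtype=float)
    chain = [z_current]
    for _ in range(n_trials):
        # Propose by perturbing the current latent vector, so candidate stimuli
        # stay within the VAE's learned manifold of plausible faces rather than
        # drifting into uninformative regions of raw stimulus space.
        z_proposal = z_current + step_size * np.random.randn(*z_current.shape)

        # Decode both latent points into stimuli shown to the participant.
        current_stimulus = vae.decode(z_current)
        proposal_stimulus = vae.decode(z_proposal)

        # In MCMCP, the participant's two-alternative choice acts as the
        # acceptance decision: the chosen stimulus becomes the chain's next state.
        if human_prefers(proposal_stimulus, current_stimulus):
            z_current = z_proposal
        chain.append(z_current)
    return chain
```

Under these assumptions, the later states of the chain (after discarding burn-in) approximate samples from the participant's mental representation of the target category, e.g., a particular facial affect.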