Prediction, explanation, and control are basic cognitive abilities. Here we show how all three can arise simultaneously from underlying mental models built during unstructured, exploration-based learning. Our experimental paradigm, interaction with a symbolic "chatbot", allows us to vary the relative difficulty of the three tasks and to measure how participants leverage the Bayesian evidence provided by their mental models when making decisions. Our experimental manipulations focus on hidden information and task complexity. With full information, the three tasks differ significantly: for example, people are more sensitive to Bayesian evidence in prediction than in control or explanation. When information is hidden, however, performance across the tasks equalizes. Taken together, our results suggest that, while task-specific heuristics may lead to different levels of performance when information is complete, more fundamental forms of reasoning, grounded in an underlying mental model and less sensitive to the specific task, come into play when information is missing.