Theories of intrinsic motivation describe how behavior is driven by inherent satisfaction rather than by rewarding outcomes alone. One computational theory quantifies “fun” as the pleasure derived from improving one’s model of the environment. Here, we refine and test this theory by predicting that fun is maximal when learning progress is maximal, corresponding to a balance between ability (or knowledge) and task difficulty. Across multiple natural data sets (e.g., “Super Mario Maker”, “Trackmania”, and “Robozzle”), we confirm our prediction that human judgments of fun are highest at intermediate levels of difficulty. We provide further evidence with a number-guessing experiment in which we manipulated the learnability of the environment by controlling the variance of the numbers that could be guessed. Both participants’ engagement and model-based analyses confirmed our predictions. Thus, beyond simply exploiting maximal rewards or exploring maximal uncertainty, the constraints of learnability demand a balance between challenge and ease.
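As a minimal illustrative sketch (our notation, not necessarily the model used in the paper), the learning-progress account can be written as

$$ \text{fun}(t) \;\propto\; \mathrm{LP}(t) \;=\; \epsilon_{t-1} - \epsilon_{t}, $$

where $\epsilon_t$ is the learner’s prediction error at time $t$. Assuming error reduction is negligible both for tasks that are far too easy (little left to learn) and far too hard (nothing can yet be learned), $\mathrm{LP}(t)$, and hence predicted fun, peaks at an intermediate difficulty matched to the learner’s current ability.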