Various methods for social learning have been proposed within the reinforcement learning framework. These methods involve the social transmission of information in specific representational formats such as policies, value functions, or world models. However, transmission of higher-level, model-based representations typically requires costly inference (i.e., mentalizing) to ``unpack'' observable actions into putative mental states (e.g., with inverse reinforcement learning). Here, we investigate cheaper, non-mentalizing alternatives to the social transmission of model-based representations, in which social information biases the statistics of experience and thereby ``hijacks'' asocial mechanisms for learning about the environment. We simulate a spatial foraging task in which a naïve learner either learns alone or observes a pre-trained expert. We test model-free vs. model-based learning combined with simple non-mentalizing social learning strategies. By analyzing generalization once the expert can no longer be observed, and the correspondence between expert and learner representations, we show how simple social learning mechanisms can give rise to complex forms of cultural transmission.
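To make the ``hijacking'' idea concrete, the following is a minimal sketch, not the paper's implementation: all names, the gridworld setup, and the expert trajectory below are illustrative assumptions. A non-mentalizing strategy (here, simple local enhancement toward states the expert was seen to visit) biases which transitions the learner experiences, while the world model itself is updated by an entirely asocial mechanism from the learner's own experience.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 5x5 gridworld; states are (row, col), actions move N/S/E/W.
SIZE = 5
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def step(state, delta):
    """Deterministic transition; movement is clipped at the grid walls."""
    r = min(max(state[0] + delta[0], 0), SIZE - 1)
    c = min(max(state[1] + delta[1], 0), SIZE - 1)
    return (r, c)

# Asocial model learning: tabular transition counts updated only from
# the learner's OWN experience, regardless of how actions were chosen.
model_counts = {}  # (state, action_index) -> {next_state: count}

def update_model(s, a, s_next):
    model_counts.setdefault((s, a), {}).setdefault(s_next, 0)
    model_counts[(s, a)][s_next] += 1

# Placeholder expert trajectory; a pre-trained expert would supply this.
expert_path = {(0, 0), (0, 1), (1, 1), (2, 1), (2, 2)}

def social_action(s):
    """Non-mentalizing local enhancement: if an adjacent cell lies on the
    expert's observed path, move toward it; otherwise act randomly.
    No inference about the expert's goals or beliefs is performed."""
    for a, delta in enumerate(ACTIONS):
        if step(s, delta) in expert_path:
            return a
    return rng.integers(len(ACTIONS))

s = (0, 0)
for t in range(50):
    a = social_action(s)              # socially biased action choice
    s_next = step(s, ACTIONS[a])      # learner's own experience
    update_model(s, a, s_next)        # purely asocial model update
    s = s_next
\end{verbatim}

In this sketch the social input never enters the learning rule itself; it only concentrates the learner's experience around expert-visited states, so the asocially learned transition counts come to resemble the expert's model of that region of the environment.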