In this study, we investigated human causal learning in a setting with continuous time and space. Our experiments revealed that people are capable causal learners in such contexts, and that standard Bayesian updating of prior beliefs partially explains how priors influence their judgments. Specifically, individuals with strong prior beliefs consistent with the ground truth were less likely to misinterpret indirect effects as direct, for example in causal chains. When priors were inconsistent with the ground truth, however, participants performed worse on average and chose less informative interventions. Computational modelling revealed that, to cope with the abundance of data in this setting, participants used their interventions to mark informative data and focused on direct outgoing links from the variable they intervened on. This task-decomposition strategy, when paired with participants' interventions, achieved accuracy comparable to inference over the entire dataset, despite ignoring almost half of the available data. These findings are in line with the resource-rational framework, in which discarding data outside of interventions saves computational cost and the full inference problem over the graph is broken down into sub-problems of inferring individual links between variables. Overall, our study reinforces the idea that humans are frugal and intuitive active learners who combine actions and inference to optimize learning while minimizing effort.
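To make the task-decomposition idea concrete, the sketch below illustrates (in Python) how an intervention-focused local strategy of this kind could operate on continuous-time event data: each candidate outgoing link from the intervened variable is scored independently, using only short windows following the interventions and discarding the rest of the event stream. This is an illustrative toy, not the model used in the study; the function name `score_outgoing_links`, the window length, and the count-based scoring rule are all hypothetical choices for the example.

```python
from typing import Dict, List


def score_outgoing_links(
    interventions: List[float],      # times at which the learner intervened on variable X
    events: Dict[str, List[float]],  # observed event times for each other variable
    window: float = 1.5,             # hypothetical causal-delay window (in seconds)
) -> Dict[str, float]:
    """Score each candidate link X -> Y as the fraction of interventions on X
    that are followed by at least one Y event within `window` seconds.
    Only post-intervention windows are inspected; all other data are ignored."""
    scores = {}
    for variable, times in events.items():
        hits = sum(
            any(t_i < t <= t_i + window for t in times)
            for t_i in interventions
        )
        scores[variable] = hits / len(interventions) if interventions else 0.0
    return scores


if __name__ == "__main__":
    # Toy data: interventions on X at t = 2, 6, 10; Y responds shortly after each,
    # Z fires at unrelated times, so only the candidate link X -> Y scores highly.
    interventions_on_x = [2.0, 6.0, 10.0]
    observed = {"Y": [2.8, 6.6, 10.9], "Z": [0.5, 4.9, 8.3]}
    print(score_outgoing_links(interventions_on_x, observed))
    # -> {'Y': 1.0, 'Z': 0.0}
```

In this toy version, each link is judged one at a time from post-intervention windows only, mirroring how the decomposition trades away part of the data for a much cheaper per-link computation.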