In light of constraints inherent to empirical research, such as finite time and resources, there has been growing interest in using artificial intelligence to streamline the scientific process. However, despite advances in automating scientific discovery, implementing strategies for sampling informative experiments remains a challenge. This metascientific study evaluates experimental sampling strategies by their effectiveness in advancing the discovery of linear models of human cognition from synthetic data. We investigate the hypothesis put forth by Dubova et al. (2022) that random sampling of experiments is more effective than model-driven sampling. Our results indicate that random sampling is indeed more effective in a majority of cases, and that the underperformance of model-driven strategies can be attributed to narrow coverage of the design space. Despite limitations in our approach, this work offers a novel framework for the metascientific study of autonomous empirical research.