Optimal stopping in a natural sampling task

Abstract

Sampling biases are often assumed to arise from the type of information that learners sample (Fiedler, 2008), the possibility of negative payoffs (Denrell, 2001), or the prevalence of small samples (Kareev et al., 2002). Here, we show that even in a natural sampling situation (repeated Bernoulli trials), in which a learner's only decision is when to stop sampling, different sampling goals can affect both the composition of the resulting samples and the inferences drawn from them. Specifically, we find that learners sampling with a binary goal ("more heads or tails?") versus a distributional goal ("how many heads?") end up with samples that differ not only in size but also in content. Binary sampling leads to more samples with extreme distributions (many more heads than tails, or vice versa) than distributional sampling. In this project, we explore how these sampling goals affect subsequent decisions made on the basis of the resulting samples.
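The contrast between the two sampling goals can be made concrete with a small simulation. The sketch below assumes a lead-based stopping rule for the binary goal (stop once one outcome is ahead by a fixed margin) and a fixed sample size for the distributional goal; both rules and all parameter values are illustrative assumptions, not the procedures used in the experiments reported here. Under these assumptions, binary-goal sampling tends to stop at lopsided counts, reproducing the tendency toward extreme sample compositions described above.

```python
import random

def sample_binary_goal(p, lead=3, max_n=50):
    """Draw Bernoulli trials until one outcome leads by `lead`
    (or max_n trials are reached). This lead-based stopping rule
    is an illustrative assumption, not the rule used in the study."""
    heads = tails = 0
    while abs(heads - tails) < lead and heads + tails < max_n:
        if random.random() < p:
            heads += 1
        else:
            tails += 1
    return heads, tails

def sample_distributional_goal(p, n=20):
    """Draw a fixed number of Bernoulli trials to estimate the
    proportion of heads. The fixed sample size is likewise an assumption."""
    heads = sum(random.random() < p for _ in range(n))
    return heads, n - heads

def share_extreme(samples, cutoff=0.8):
    """Fraction of samples whose observed head proportion is at
    least `cutoff` or at most 1 - `cutoff`."""
    extreme = sum(
        1 for h, t in samples
        if h / (h + t) >= cutoff or h / (h + t) <= 1 - cutoff
    )
    return extreme / len(samples)

if __name__ == "__main__":
    random.seed(0)
    p = 0.5  # a fair coin
    binary = [sample_binary_goal(p) for _ in range(10_000)]
    distributional = [sample_distributional_goal(p) for _ in range(10_000)]
    print("extreme samples, binary goal:        ", share_extreme(binary))
    print("extreme samples, distributional goal:", share_extreme(distributional))
```

Running this sketch with a fair coin shows a substantially larger share of extreme samples under the binary-goal stopping rule than under fixed-size distributional sampling, which is the qualitative pattern the abstract describes.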
