Acceptability ratings cannot be taken at face value
Published Web Location
http://ling.auf.net/lingbuzz/004862

Abstract
This chapter addresses how linguists’ empirical (syntactic) claims should be tested with non-linguists. Recent experimental work attempts to measure rates of convergence between data presented in journal articles and the results of large surveys. Three follow-up experiments to one such study are presented. It is argued that the original method may underestimate the true rate of convergence because it leaves considerable room for naïve subjects to give ratings that do not reflect their true acceptability judgments of the relevant structures. To understand what can go wrong, the experiments were conducted in two parts. In the first part, participants rated visually presented sentences on a computer, replicating previous work. The second part was an interview in which the experimenter asked participants about the ratings they had given to particular items, in order to determine what interpretation or parse they had assigned, whether they had missed any critical words, and so on.