Data sharing has become normative policy, enforced by governments, funding agencies, journals, and other stakeholders. Reasons for data sharing include leveraging investments in research, reducing the need to collect new data, addressing new research questions by reusing or combining extant data, and reproducing research, which would lead to greater accountability and transparency and to less fraud. Much of the scholarship on data practices attempts to understand the sociotechnical barriers to sharing, with the goal of designing infrastructures, policies, and cultural interventions that will overcome these barriers. Yet data sharing and reuse are common practice in only a few fields. Astronomy and genomics in the sciences, survey research in the social sciences, and archaeology in the humanities are the typical exemplars, and they remain the exceptions rather than the rule. The lack of success of data sharing policies, despite accelerating enforcement over the last decade, indicates the need not only for a much deeper understanding of the roles of data in contemporary science but also for new models of scientific practice.

This presentation reports on research in progress, funded by the Alfred P. Sloan Foundation, that examines three factors that appear to influence data practices across domains: How does the mix of domain expertise influence the collection, use, and reuse of data, and vice versa? What factors of scale – such as data, discipline, distribution, and duration – influence research practices, and how? How does the centralization or decentralization of data collection influence use, reuse, curation, and project strategy, and vice versa? Context for this talk is drawn from the presenter’s recent book, Big Data, Little Data, No Data: Scholarship in the Networked World (MIT Press, 2015).