As AI systems come to permeate human society, there is an increasing need for such systems to explain their actions, conclusions, or decisions. This is presently fuelling a surge of interest in machine-generated explanation. However, there are not only technical challenges to be met here; there is also considerable uncertainty about what suitable target explanations should look like. In this paper, we describe a case study that begins to bridge machine reasoning and the philosophical and psychological literatures on what counts as good reasoning, by eliciting explanations from human experts. The work illustrates how concrete cases rapidly move discussion beyond abstract considerations of explanatory virtues toward specific targets more suitable for emulation by machines. On the one hand, it highlights the limitations of present algorithms for generating explanations from Bayesian networks; on the other, it provides concrete direction for future algorithm construction.