Clinical reasoning is a fundamental skill required of all physicians. Direct observation is one method medical schools use to assess clinical reasoning, in which faculty observers rate a student based on the student's interaction with a patient. Variability in how individual faculty members
define clinical reasoning, however, can reduce assessment reliability. Understanding how faculty make assessment decisions about students' clinical reasoning can improve the reliability and validity of medical schools' assessments. Fourteen UC Davis School of Medicine faculty members
completed think-aloud interviews while watching a medical student's encounter with a
standardized patient. Faculty members were asked to assess the student’s clinical reasoning
ability and were not provided any information about the student or the case other than a door
note. The faculty were then asked to provide written summative feedback to the student. The
think-aloud interviews were video-recorded, transcribed, and analyzed using thematic analysis.
The analysis yielded five themes describing how faculty members assess medical students: student
factors, situational factors, assessor factors, integration, and judgment. Additional findings were noted about how faculty provide narrative feedback to students. Together, these themes form a model of faculty reasoning: the process by which faculty make assessment decisions about a medical student's clinical reasoning ability. Faculty assessment decisions are
influenced by a number of factors. The ways in which faculty process information about the student and the encounter, and then integrate it with their own existing knowledge and experience, are unique to the individual. Understanding this process creates opportunities to influence those factors and thereby improve consistency and, therefore, validity.