Number of Patient Encounters in Emergency Medicine Residency Does Not Correlate with In-Training Exam Domain Scores

Introduction: Emergency medicine (EM) residents take the American Board of Emergency Medicine (ABEM) In-Training Examination (ITE) every year. This examination is based on the ABEM Model of Clinical Practice (Model). The purpose of this study was to determine whether a relationship exists between the number of patient encounters a resident sees within a specific clinical domain and their ITE performance on questions related to that domain.
Methods: Chief complaint data for each patient encounter were taken from the electronic health record for EM residents graduating in three consecutive years between 2016 and 2021. We excluded patient encounters without an assigned resident or a listed chief complaint. Chief complaints were then categorized into one of 20 domains based on the 2016 Model. We calculated correlations between the total number of encounters seen by a resident across all clinical years and their ITE performance in the corresponding clinical domain from their third year of training.
Results: A total of 232,625 patient encounters from 69 eligible residents were available for analysis. We found no statistically significant correlations after Bonferroni correction for multiple analyses.
Conclusion: There was no correlation between the number of patient encounters a resident has within a clinical domain and their ITE performance on questions corresponding to that domain. This suggests the need for separate but parallel educational missions to achieve success in both the clinical environment and standardized testing.


INTRODUCTION
Previous work has demonstrated no correlation between the total number of patient encounters during EM residency and ITE score. 3 However, it is unclear whether any relationship exists between the number of patient encounters a resident has within a specific clinical domain during training and their ITE performance on questions that correspond to that domain. Should no relationship exist, it would call into question the utility of the ITE as a measure of whether a resident is progressing appropriately with regard to their clinical skills.
Kolb's experiential learning theory would suggest that residents who have greater clinical exposure in a particular area (eg, cardiovascular complaints) should be able to better conceptualize and achieve a greater understanding of clinical concepts than by reading about them alone, provided that they engage in patient follow-up, self-reflection, and/or facilitated feedback with attending physicians. If experiential learning theory were to apply to health professions education, residents with increased experience should theoretically perform better on ITE questions corresponding to that domain, as this test is meant to be a surrogate for the knowledge required to competently practice EM. 5 Our purpose in this study was to determine whether there was a relationship between ITE performance within individual content domains of the Model and the number of patients seen during residency with chief complaints in each domain.

METHODS
This project was deemed exempt quality improvement by the University of Wisconsin Health Sciences Institutional Review Board.

Study Setting
We conducted the study at a three-year EM residency program situated within an urban, academic emergency department (ED) in the Midwest. The ED has 54 beds with a volume of approximately 60,000 patient visits annually. During the period of the study, the residency had 12 postgraduate year-1 positions available each year.

Data Acquisition
In this study we used deidentified, first chief complaint data rather than downstream categorization (eg, final diagnosis, admitting diagnosis). We used chief complaints to identify the nature of the patient encounter as this data was available at the time of patient presentation, likely dictated most of the ED evaluation, and would not have been affected by changes in treatment identified during later stages of a patient's hospital course. Residents were eligible for inclusion if they graduated in one of three consecutive years between 2016 and 2021. All patient encounters from all years of training involving eligible EM residents were queried. To maintain anonymity, each resident was assigned a study identification number; the ID key was accessible only to the senior author, a member of the residency leadership team.
We excluded from analysis encounters where no chief complaint was listed or no resident was assigned. In cases where multiple residents were assigned to a single encounter, we designated the first resident assigned as the resident of record, as the first resident is typically the most cognitively involved in determining the patient's diagnostic and treatment strategy. The chief complaint for each encounter was entered by the nurse who initially cared for the patient in the ED and was nearly always selected from a list of frequent chief complaints. Resident ITE scores across domains during the third year of training were taken from internal residency records.

Data Analysis
A previously published list of common EM chief complaints had been compiled and independently categorized into one of 20 content domains correlating with the 2016 ABEM Model of Clinical Practice by two board-certified EM attending physicians. 4 For all chief complaints appearing in our data that were not previously categorized, we repeated the same categorization process with two board-certified EM attending physicians at our institution. In both cases, if there was disagreement between the two reviewers, a third board-certified emergency physician was brought in to adjudicate. We categorized complaints in which a symptom was used as the descriptor and could potentially correspond to multiple organ systems (eg, chest pain) into domains based on what was most likely given the general experiences of the coding physicians, rather than into the "Signs, Symptoms, and Presentations" domain.
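The two-reviewer categorization with third-reviewer adjudication described above can be sketched as follows. This is an illustrative sketch only, not the authors' actual tooling; the domain labels and reviewer mappings are invented stand-ins for physician judgment.

```python
# Hypothetical sketch of the categorization workflow: two independent
# reviewers map each chief complaint to a 2016 Model domain, and a third
# reviewer adjudicates disagreements. All labels here are invented.

def assign_domain(complaint, reviewer_a, reviewer_b, adjudicator):
    """Return a domain label, escalating to the adjudicator on disagreement."""
    a = reviewer_a(complaint)
    b = reviewer_b(complaint)
    return a if a == b else adjudicator(complaint)

# Toy reviewers: dictionary lookups standing in for physician judgment
reviewer_a = {"chest pain": "Cardiovascular", "rash": "Cutaneous"}.get
reviewer_b = {"chest pain": "Cardiovascular", "rash": "Immunologic"}.get
adjudicator = {"rash": "Cutaneous"}.get

print(assign_domain("chest pain", reviewer_a, reviewer_b, adjudicator))  # agreement
print(assign_domain("rash", reviewer_a, reviewer_b, adjudicator))        # adjudicated
```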
The ITE scores are reported by ABEM by domain according to the Model. We calculated Pearson's correlation coefficient, Pearson's coefficient of determination, and Spearman's rank correlation along with 95% confidence intervals for each domain, comparing individual caseloads within each content area to the same individual's ITE subscore percentages within that domain using SPSS (IBM Corporation, Armonk, NY). The Bonferroni correction for multiple comparisons was used to determine significance.
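The per-domain analysis described above could be sketched as below. This is a minimal illustration using synthetic data (the actual analysis was performed in SPSS); the array shapes mirror the 69 residents and 20 Model domains, but all values are randomly generated, not study data.

```python
# Illustrative sketch: per-domain Pearson and Spearman correlations of
# encounter counts vs. ITE subscore percentages, with a Bonferroni-
# adjusted significance threshold. All data here are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_residents, n_domains = 69, 20
alpha = 0.05
bonferroni_alpha = alpha / n_domains  # corrected per-test threshold

# Synthetic stand-ins for encounter counts and ITE subscore percentages
encounters = rng.poisson(lam=500, size=(n_residents, n_domains))
ite_scores = rng.uniform(50, 95, size=(n_residents, n_domains))

for d in range(n_domains):
    x, y = encounters[:, d], ite_scores[:, d]
    pearson = stats.pearsonr(x, y)          # r and two-sided p-value
    lo, hi = pearson.confidence_interval()  # 95% CI via Fisher z-transform
    rho, p_spear = stats.spearmanr(x, y)
    sig = pearson.pvalue < bonferroni_alpha
    print(f"domain {d:2d}: r={pearson.statistic:+.2f} "
          f"(95% CI {lo:+.2f} to {hi:+.2f}), R^2={pearson.statistic**2:.2f}, "
          f"rho={rho:+.2f}, significant={sig}")
```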

RESULTS
We included in the analysis a total of 232,625 patient encounters from 69 residents. Resident performance on the ITE is shown in Table 1. Pearson correlation coefficients ranged from -0.12 to 0.28 across the different domains. Correlation coefficients for each domain's clinical exposures and ITE scores, as well as their significance levels, are listed in Table 2. No significant correlations were identified after Bonferroni correction.

DISCUSSION
The number of patient encounters within a given domain showed no correlation with resident performance on the corresponding ITE domain. This is in line with previous studies that have demonstrated little relation between total number of patient encounters during residency and performance on formal testing. 3 It has been demonstrated that differences exist between resident clinical exposure and the weight each domain is given on the ITE, 6 but our study further suggests that a disconnect exists between the breadth of clinical encounters and ITE performance. This suggests that program leadership should limit the use of ITE scores as a global assessment tool for a resident's clinical progress and instead focus on those scores' ability to predict success on the ABEM Qualifying Examination (QE).
It appears that success in clinical practice does not imply success on standardized testing. This argues for maintaining parallel, separate educational missions, as success in the clinical environment and passing the QE are both critical components of an emergency physician's career after residency graduation. Sustaining two separate missions would require a residency program to devote time to both, which could tax a program's faculty. Alternatively, this dual focus could require a program to rely on commercial products to provide the specific knowledge needed to do well on the ITE. Access to online question banks (Qbank LLC, Stockholm, Sweden) has been demonstrated to be beneficial, 7 but their use may tax a residency's financial resources. While it is possible that the breadth (or lack) of clinical experience in certain areas would direct a resident's self-study practices, our results suggest that this strategy may be suboptimal, at least as far as ITE study is concerned.
Instead, residents would be best served with a broad study plan regardless of the range of their clinical encounters, which is in line with previous studies demonstrating the differences between residents' patient care experiences and the blueprint provided by the Model. 6,8 There remains room for further study to more clearly elucidate the link, if any, between clinical training and ITE performance.
Overall, our results appear to be in opposition to Kolb's experiential learning theory, which would have suggested a more robust link between clinical experience and testing performance. There may be multiple reasons for this discrepancy. First, experiential learning theory relies heavily on reflection to translate experience into knowledge. 5 On one hand, residents have multiple opportunities to reflect on cases during their clinical work, including documentation of the clinical encounter, feedback provided by faculty and other staff, and the patient case logs mandated by the Accreditation Council for Graduate Medical Education. 9 On the other hand, it is possible that the amount of reflection for each case is low, particularly during busy shifts where the demands of patient care may limit the time available for case review and feedback. Reflection on clinical experiences also requires the identification of experiences as learning opportunities, which often relies on faculty and peers and may not be recognized by trainees. 10 Finally, there may be minimal to no dedicated time built into residency for reflection; residents must therefore balance it against a busy schedule of other clinical and personal activities.
Another potential reason for the disconnect between actual clinical experiences and a corresponding ITE question is differences in medical content. It is possible that the topics in the questions revolve around atypical presentations that are seen infrequently, if at all, during the span of a three- or four-year EM residency. If residents are not seeing certain pathologies (eg, scombroid poisoning) during their clinical shifts, it is unlikely that their clinical exposures would assist them on ITE questions. This does not imply that programs fail to provide a comprehensive clinical experience to their residents, but rather that certain unavoidable gaps occur due to differences in communities served, geographical region, and similar factors. For example, residents practicing in Wisconsin are unlikely to encounter a scorpion sting in their day-to-day clinical responsibilities, yet this is identified as a critical topic in the Model. Therefore, program leaders should identify areas in which potential clinical gaps exist and devote extra time to these domains during their didactic conferences.
It is possible that ceiling effects are responsible for the overall lack of correlations we found and that residents who see a substantially lower number of patients in a particular domain would have lower ITE scores on that section. This may not have been captured by our data if the included residents did not fall below this threshold. However, programs perceiving a large deficit in clinical cases corresponding to a particular domain could review their own performance data to determine whether a significant deficit on their residents' ITE score reports exists within that domain.

LIMITATIONS
This study has several limitations. First, assessing case content by chief complaint could inappropriately categorize some presentations. For example, a patient presenting with a "behavior problem" (categorized under Psychobehavioral Disorders) could have anticholinergic toxicity because of an overdose (better categorized as a Toxicologic Disorder). While we considered using discharge or primary diagnosis instead of chief complaint to categorize our clinical exposures, we ultimately felt that this was inconsistent with the way EM is practiced. Additionally, some of the chief complaints of the encounters may have been categorized into the wrong domain due to errors on the part of the research team.
We used the 2016 Model of Clinical Practice, which informed the creation of many of the ITEs administered during the years included in the study. Our study did not account for other factors that may have affected a resident's performance on the ITE, such as differences in the type and use of exam preparatory materials, although the study resources made freely available by the program were the same throughout the study period (Rosh Review, Los Angeles, CA). Effort in the clinical setting also may not translate to success on the ITE, as the test offers no direct disincentives for poor performance, and any incentives for success are program specific. 11 Finally, these data were collected at a single site; our findings may therefore be difficult to generalize to institutions with different clinical environments and test preparation resources.

CONCLUSION
We found no significant correlation between resident clinical exposure and performance on the ITE. This study supports the concept that standardized test performance is not linked to performance in other areas and suggests the need for the creation of separate, parallel educational missions to achieve success in both areas.