Background
Few outcomes experienced by children receiving care in the Emergency Department (ED) are amenable to measurement for the purpose of assessing quality of care. The purpose of this study was to develop, test, and validate a new implicit review instrument that measures the quality of care delivered to children in EDs.
Methods
We developed a structured implicit review instrument consisting of 5 items, each scored on a 7-point scale, that encompasses four aspects of care: the physician's initial data gathering; integration of information and development of appropriate diagnoses; initial treatment plan and orders; and plan for disposition and follow-up. Two pediatric emergency medicine physicians applied the 5-item instrument to children presenting in the highest triage category to four rural EDs, and we assessed the reliability of the average summary scores (possible range 5-35) across the two reviewers using standard measures. We also validated the instrument by comparing the mean summary score between patients with and without medication errors (ascertained independently by two pharmacists) using a two-sample t-test.
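To make the scoring and the validation comparison concrete, the following is a minimal sketch, assuming Python with NumPy and SciPy and entirely hypothetical, randomly generated ratings; it is not the study's analysis code and does not reproduce the study data. It illustrates summing the 5 items into a 5-35 summary score, averaging the two reviewers, and applying the two-sample t-test.

    # Minimal sketch (not the study's code) of the scoring and validation t-test.
    # All data below are randomly generated; variable names are hypothetical.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_charts = 178

    # Hypothetical ratings: 5 items, each scored 1-7, per chart and per reviewer.
    reviewer1 = rng.integers(1, 8, size=(n_charts, 5))
    reviewer2 = rng.integers(1, 8, size=(n_charts, 5))

    # Summary score = sum of the 5 items (possible range 5-35);
    # the analysis uses the mean of the two reviewers' summary scores.
    mean_summary = (reviewer1.sum(axis=1) + reviewer2.sum(axis=1)) / 2

    # Hypothetical medication-error flag (ascertained independently in the study).
    med_error = rng.integers(0, 2, size=n_charts).astype(bool)

    # Two-sample t-test: mean summary score, charts without vs. with medication errors.
    t_stat, p_val = stats.ttest_ind(mean_summary[~med_error], mean_summary[med_error])
    print(f"no-error mean = {mean_summary[~med_error].mean():.1f}, "
          f"error mean = {mean_summary[med_error].mean():.1f}, p = {p_val:.3f}")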
Results
We reviewed the medical records of 178 pediatric patients. The mean and median summary scores for this cohort were 27.4 and 28.5, respectively. Internal consistency was high (Cronbach's alpha of 0.92 and 0.89). All items showed a significant (p < 0.005) positive correlation between reviewers by the Spearman rank correlation (range 0.24 to 0.39). Exact agreement on individual items between reviewers ranged from 70.2% to 85.4%. The intraclass correlation coefficient for the mean of the total summary score across the two reviewers was 0.65. The validity of the instrument was supported by a higher mean score for children without medication errors than for those with medication errors, a difference that trended toward significance (28.5 vs. 26.0, p = 0.076).
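The reliability statistics reported above can likewise be sketched on hypothetical ratings; the values printed will not match the study's results. Cronbach's alpha is computed from the standard formula, and the per-item Spearman correlations and percent exact agreement use SciPy and NumPy; the intraclass correlation coefficient would typically be obtained from an ANOVA- or mixed-model-based routine and is omitted here.

    # Hedged sketch of the reliability statistics on hypothetical ratings
    # (printed values will not match the study's results).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n_charts, n_items = 178, 5
    r1 = rng.integers(1, 8, size=(n_charts, n_items))                       # reviewer 1
    r2 = np.clip(r1 + rng.integers(-1, 2, size=(n_charts, n_items)), 1, 7)  # reviewer 2

    def cronbach_alpha(items):
        """Standard formula: k/(k-1) * (1 - sum of item variances / variance of total)."""
        k = items.shape[1]
        return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                              / items.sum(axis=1).var(ddof=1))

    print("Cronbach's alpha, reviewer 1:", round(cronbach_alpha(r1), 2))
    print("Cronbach's alpha, reviewer 2:", round(cronbach_alpha(r2), 2))

    # Per-item Spearman correlation between reviewers and percent exact agreement.
    for item in range(n_items):
        rho, p = stats.spearmanr(r1[:, item], r2[:, item])
        agree = 100 * np.mean(r1[:, item] == r2[:, item])
        print(f"item {item + 1}: rho = {rho:.2f} (p = {p:.3g}), "
              f"exact agreement = {agree:.1f}%")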
Conclusion
The instrument we developed to measure the quality of care provided to children in the ED has high internal consistency, fair to good inter-rater reliability and correlation, and high content validity. Its validity is further supported by the lower average summary score observed in the presence of medication errors, a difference that trended toward statistical significance.