eScholarship
Open Access Publications from the University of California

Volume 19, Issue 2, 2018

Neuroscience

Implementation of a Rapid, Protocol-based TIA Management Pathway

Introduction: Our goal was to assess whether use of a standardized clinical protocol improves efficiency for patients who present to the emergency department (ED) with symptoms of transient ischemic attack (TIA).

Methods: We performed a structured, retrospective, cohort study at a large, urban, tertiary care academic center. In July 2012 this hospital implemented a standardized protocol for patients with suspected TIA. The protocol selected high-risk patients for admission and low/intermediate-risk patients to an ED observation unit for workup. Recommended workup included brain imaging, vascular imaging, cardiac monitoring, and observation. Patients were included if clinical providers determined the need for workup for TIA. We included consecutive patients presenting during a six-month period prior to protocol implementation, and those presenting between 6-12 months after implementation. Outcomes included ED length of stay (LOS), hospital LOS, use of neuroimaging, and 90-day risk of stroke or TIA.

Results: From 01/2012 to 06/2012, 130 patients were evaluated for TIA symptoms in the ED, and from 01/2013 to 06/2013, 150 patients. The final diagnosis was TIA or stroke in 45% before vs. 41% after (p=0.18). Following the intervention, the inpatient admission rate decreased from 62% to 24% (p<0.001), median ED LOS decreased by 1.2 hours (5.7 to 4.9 hours, p=0.027), and median total hospital LOS decreased from 29.4 hours to 23.1 hours (p=0.019). The proportion of patients receiving head computed tomography (CT) went from 68% to 58% (p=0.087); brain magnetic resonance (MR) imaging from 83% to 88% (p=0.44); neck CT angiography from 32% to 22% (p=0.039); and neck MR angiography from 61% to 72% (p=0.046). Ninety-day stroke or recurrent TIA among those with a final diagnosis of TIA was 3% for both periods.

Conclusion: Implementation of a TIA protocol significantly reduced ED LOS and total hospital LOS.

Trauma

Prehospital Lactate Predicts Need for Resuscitative Care in Non-hypotensive Trauma Patients

Introduction: The prehospital decision of whether to triage a patient to a trauma center can be difficult. Traditional decision rules are based heavily on vital sign abnormalities, which are insensitive in predicting severe injury. Prehospital lactate (PLac) measurement could better inform the triage decision. PLac’s predictive value has previously been demonstrated in hypotensive trauma patients but not in a broader population of normotensive trauma patients transported by an advanced life support (ALS) unit.

Methods: This was a secondary analysis from a prospective cohort study of all trauma patients transported by ALS units over a 14-month period. We included patients who received intravenous access and were transported to a Level I trauma center. Patients with a prehospital systolic blood pressure ≤ 100 mmHg were excluded. We measured PLac’s ability to predict the need for resuscitative care (RC) and compared it to that of the shock index (SI). The need for RC was defined as either death in the emergency department (ED), disposition to surgical intervention within six hours of ED arrival, or receipt of five units of blood within six hours. We calculated the risk associated with categories of PLac.

Results: Among 314 normotensive trauma patients, the area under the receiver operator characteristic curve for PLac predicting need for RC was 0.716, which did not differ from that for SI (0.631) (p=0.125). PLac ≥ 2.5 mmol/L had a sensitivity of 74.6% and a specificity of 53.4%. The odds ratio for need for RC associated with a 1-mmol/L increase in PLac was 1.29 (95% confidence interval [CI] [0.40 – 4.12]) for PLac < 2.5 mmol/L; 2.27 (1.10 – 4.68) for PLac from 2.5 to 4.0 mmol/L; and 1.26 (1.05 – 1.50) for PLac ≥ 4 mmol/L.
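For readers who want to reproduce this type of analysis, the Python sketch below shows how an area under the receiver operating characteristic curve and the test characteristics of a single lactate cutoff can be computed. The data here are synthetic placeholders, not the study dataset.

```python
# Illustrative sketch only: synthetic values standing in for prehospital
# lactate (mmol/L) and the need-for-resuscitative-care outcome.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
needs_rc = rng.integers(0, 2, size=314)                          # 1 = needed resuscitative care
lactate = rng.gamma(shape=2.0, scale=1.2, size=314) + needs_rc   # fake mmol/L values

# Discrimination: area under the receiver operating characteristic curve
auc = roc_auc_score(needs_rc, lactate)

# Test characteristics of a single cutoff (>= 2.5 mmol/L)
positive = lactate >= 2.5
tp = np.sum(positive & (needs_rc == 1))
fn = np.sum(~positive & (needs_rc == 1))
tn = np.sum(~positive & (needs_rc == 0))
fp = np.sum(positive & (needs_rc == 0))
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"AUC={auc:.3f}  sensitivity={sensitivity:.1%}  specificity={specificity:.1%}")
```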

Conclusion: PLac was predictive of the need for RC among normotensive trauma patients. Although it was no more predictive than SI, it has distinct advantages and disadvantages compared with SI and could still be useful. Prospective validation of existing triage decision rules augmented by PLac should be investigated.

 

Societal Influence on Emergency Department Care

Emergency Department Experience with Novel Electronic Medical Record Order for Referral to Food Resources

Introduction: Food insecurity is a significant issue in the United States and is prevalent in emergency department (ED) patients. The purpose of this study was to report the novel use of an integrated electronic medical record (EMR) order for food resources, and to describe our initial institutional referral patterns after focused education and implementation of the order.

Methods: This was a retrospective, observational study, describing food-bank referral patterns before and after the implementation of dedicated ED education on the novel EMR order for food resources.

Results: In 2015, prior to formal education, a total of 1,003 referrals were made to the regional food bank, Second Harvest Heartland. Five referrals were made from the ED. In 2016, after the educational interventions regarding the referral, there were 1,519 referrals hospital-wide, and 55 referrals were made from the ED. Of the 1,519 referrals, 1,129 (74%) were successfully contacted by Second Harvest Heartland, and 954 (63%) accepted and received assistance.

Conclusion: Use of the EMR as a tool to refer patients to partner organizations for food resources is plausible and may result in an increase in ED referrals for food resources. Appropriate education is crucial for application of this novel ED process.

Emergency Department Administration

Case Management Reduces Length of Stay, Charges, and Testing in Emergency Department Frequent Users

Introduction: Case management is an effective, short-term means to reduce emergency department (ED) visits in frequent users of the ED. This study sought to determine the effectiveness of case management on frequent ED users, in terms of reducing ED and hospital length of stay (LOS), accrued costs, and utilization of diagnostic tests.

Methods: The study consisted of a retrospective chart review of ED and inpatient visits in our hospital’s ED case management program, comparing patient visits made in the one year prior to enrollment in the program, to the visits made in the one year after enrollment in the program. We examined the LOS, use of diagnostic testing, and monetary charges incurred by these patients one year prior and one year after enrollment into case management. 

Results: The study consisted of 158 patients in case management. Comparing the one year prior to enrollment to the one year after enrollment, ED visits decreased by 49%, inpatient admissions decreased by 39%, the use of computed tomography imaging decreased by 41%, the use of ultrasound imaging decreased by 52%, and the use of radiographs decreased by 38%. LOS in the ED and for inpatient admissions decreased by 39%, reducing total LOS for these patients by 178 days. ED and hospital charges incurred by these patients decreased by 5.8 million dollars, a 41% reduction. All differences were statistically significant.

Conclusion: Case management for frequent users of the ED is an effective method to reduce patient visits, the use of diagnostic testing, length of stay, and cost within our institution.

Emergency Department Operations

Transition of Care from the Emergency Department to the Outpatient Setting: A Mixed-Methods Analysis

Introduction: The goal of this study was to characterize current practices in the transition of care between the emergency department and primary care setting, with an emphasis on the use of the electronic medical record (EMR). 

Methods: Using literature review and modified Delphi technique, we created and tested a pilot survey to evaluate for face and content validity. The final survey was then administered face-to-face at eight different clinical sites across the country. A total of 52 emergency physicians (EP) and 49 primary care physicians (PCP) were surveyed and analyzed. We performed quantitative analysis using chi-square test. Two independent coders performed a qualitative analysis, classifying answers by pre-defined themes (inter-rater reliability > 80%). Participants’ answers could cross several pre-defined themes within a given question. 

Results: EPs were more likely than PCPs to prefer telephone communication (30/52 [57.7%] vs. 3/49 [6.1%], p < 0.0001), whereas PCPs were more likely to prefer using the EMR for discharge communication (33/49 [67.4%] vs. 13/52 [25%], p < 0.0001). EPs were more likely to report not needing to communicate with a PCP when a patient had a benign condition (23/52 [44.2%] vs. 2/49 [4.1%], p < 0.0001), but were more likely to communicate if the patient required urgent follow-up prior to discharge from the ED (33/52 [63.5%] vs. 20/49 [40.8%], p = 0.029). When discussing barriers to effective communication, 51/98 (52%) cited communication logistics, 49/98 (50%) cited setting/environmental constraints, and 32/98 (32%) cited EMR access as a significant barrier.

Conclusion: Significant differences exist between EPs and PCPs in the transition of care process. EPs preferred telephone contact synchronous to the encounter whereas PCPs preferred using the EMR asynchronous to the encounter. Providers believe EP-to-PCP contact is important for improving patient care, but report varied expectations and multiple barriers to effective communication. This study highlights the need to optimize technology for an effective transition of care from the ED to the outpatient setting.

  • 1 supplemental file

A Novel Approach to Addressing an Unintended Consequence of Direct to Room: The Delay of Initial Vital Signs

Introduction: The concept of “direct to room” (DTR) and “immediate bedding” has been described in the literature as a mechanism to improve front-end, emergency department (ED) processing. The process allows for an expedited clinician-patient encounter. An unintended consequence of DTR was a time delay in obtaining the initial set of vital signs upon patient arrival.  

Methods: This retrospective cohort study was conducted at a single, academic, tertiary-care facility with an annual census of 94,000 patient visits. Inclusion criteria were all patients who entered the ED from 11/1/15 to 5/1/16 between the hours of 7 am and 11 pm. During the implementation period, a vital signs station was created and a personal care assistant was assigned to the waiting area with the designated job of obtaining vital signs on all patients upon arrival to the ED and prior to leaving the waiting area. Time to first vital sign documented (TTVS) was defined as the time from quick registration to first vital sign documented.

Results: In the pre-implementation period, the mean TTVS was 15.3 minutes (N=37,900). In the post-implementation period, the mean TTVS was 9.8 minutes (N=39,392). The implementation yielded a 35% relative decrease and an absolute reduction in mean TTVS of 5.5 minutes (p<0.0001).

Conclusion: This study demonstrated that coupling quick registration with a dedicated vital signs station was successful at overcoming delays in obtaining initial vital signs.

 

Emergency Medical Services

Variations in Cardiac Arrest Regionalization in California

Introduction: The development of cardiac arrest centers and regionalization of systems of care may improve survival of patients with out-of-hospital cardiac arrest (OHCA). This survey of the local emergency medical services (EMS) agencies (LEMSA) in California was intended to determine current practices regarding the treatment and routing of OHCA patients and the extent to which EMS systems have regionalized OHCA care across California.

Methods: We surveyed all of the 33 LEMSA in California regarding the treatment and routing of OHCA patients according to the current recommendations for OHCA management. 

Results: Two counties, representing 29% of the California population, have formally regionalized cardiac arrest care. Twenty of the remaining LEMSA have specific regionalization protocols to direct all OHCA patients with return of spontaneous circulation to designated percutaneous coronary intervention (PCI)-capable hospitals, representing another 36% of the population. There is large variation in LEMSA ability to influence in-hospital care. Only 14 agencies (36%), representing 44% of the population, have access to hospital outcome data, including survival to hospital discharge and cerebral performance category scores.

Conclusion: Regionalized care of OHCA is established in two of 33 California LEMSA, providing access to approximately one-third of California residents. Many other LEMSA direct OHCA patients to PCI-capable hospitals for primary PCI and targeted temperature management, but there is limited regional coordination and system quality improvement. Only one-third of LEMSA have access to hospital data for patient outcomes.

  • 1 supplemental file

Outcomes of Emergency Medical Service Usage in Severe Road Traffic Injury during Thai Holidays

Introduction: Thailand has the highest mortality from road traffic injury (RTI) in the world. Incidence rates of RTI in Thailand are usually higher over long holidays such as New Year and Songkran. To our knowledge, there have been no studies that describe the impact of emergency medical service (EMS) utilization by RTI patients in Thailand. We sought to determine the outcomes of EMS utilization in severe RTIs during these holidays.

Methods: We conducted a retrospective review study by using a nationwide registry that collected RTI data from all hospitals in Thailand during the New Year holidays in 2008–2015 and Songkran holidays in 2008–2014. A severe RTI patient was defined as one who was admitted, transferred to another hospital, or who died at the emergency department (ED) or during referral. We excluded patients who died at the scene, those who were not transported to the ED, and those who were discharged from the ED. Outcomes associated with EMS utilization were identified by using multiple logistic regression and adjusted by using factors related to injury severity.

Results: Overall, we included 100,905 patients in the final analysis; 39,761 severe RTI patients (39.40%; 95% confidence interval [CI] [39.10%–39.71%]) used EMS transportation to hospitals. Severe RTI patients transported by EMS had a significantly higher mortality rate in the ED and during referral than those who were not (2.00% vs. 0.78%, p < 0.001). Moreover, EMS use was significantly associated with an increased mortality rate in the first 24 hours of admission to hospitals (1.38% for EMS use vs. 0.57% for no EMS use, p < 0.001). EMS utilization was a significant predictor of mortality in EDs and during referral (adjusted odds ratio [OR] 2.19; 95% CI [1.88–2.55]), and of mortality in the first 24 hours of admission (adjusted OR 2.31; 95% CI [1.95–2.73]).

Conclusion: In this cohort, severe RTI patients transported by EMS had a significantly higher mortality rate than those who went to hospitals using private vehicles during these holidays.

Endemic Infections

A Predictive Model Facilitates Early Recognition of Spinal Epidural Abscess in Adults

Introduction: Spinal epidural abscess (SEA), a highly morbid and potentially lethal deep tissue infection of the central nervous system, has more than tripled in incidence over the past decade. Early recognition at the point of initial clinical presentation may prevent irreversible neurologic injury or other serious, adverse outcomes. To facilitate early recognition of SEA, we developed a predictive scoring model.

Methods: Using data from a 10-year, retrospective, case-control study of adults presenting for care at a tertiary-care, regional, academic medical center, we used the Integrated Discrimination Improvement Index (IDI) to identify candidate discriminators and created a multivariable logistic regression model, refined based on p-value significance. We selected a cutpoint that optimized sensitivity and specificity. 

Results: The final multivariable logistic regression model based on five characteristics – patient age, fever and/or rigor, antimicrobial use within 30 days, back/neck pain, and injection drug use – shows excellent discrimination (AUC 0.88 [95% confidence interval 0.84, 0.92]). We used the model’s β coefficients to develop a scoring system in which a cutpoint of six correctly identifies cases 89% of the time. Bootstrapped validation measures suggest this model will perform well across samples drawn from this population.
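As an illustration of how β coefficients from a logistic regression can be turned into an additive point score with a decision cutpoint, a minimal Python sketch follows. The coefficients, scaling, and patient values are hypothetical placeholders, not the published SEA model.

```python
# Hypothetical example of converting logistic-regression beta coefficients
# into an additive integer risk score; these are NOT the published SEA
# coefficients, and the cutpoint of 6 is carried over only for illustration.
EXAMPLE_BETAS = {
    "age_per_decade": 0.35,
    "fever_or_rigor": 1.10,
    "recent_antimicrobials": 0.90,
    "back_or_neck_pain": 1.40,
    "injection_drug_use": 1.70,
}

def risk_score(predictors: dict, betas: dict = EXAMPLE_BETAS, scale: float = 2.0) -> int:
    """Sum the scaled log-odds contributions and round to an integer score."""
    return round(sum(scale * betas[k] * predictors.get(k, 0) for k in betas))

patient = {"age_per_decade": 5, "back_or_neck_pain": 1, "injection_drug_use": 1}
score = risk_score(patient)
print("score:", score,
      "-> consider emergent spinal imaging" if score >= 6 else "-> lower predicted risk")
```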

Conclusion: Our predictive scoring model appears to reliably discriminate patients who require emergent spinal imaging upon clinical presentation to rule out SEA and should be used in conjunction with clinical judgment.

High Prevalence of Sterile Pyuria in the Setting of Sexually Transmitted Infection in Women Presenting to an Emergency Department

Introduction: The clinical presentations of sexually transmitted infections (STI) and urinary tract infections (UTI) often overlap, and symptoms of dysuria and urinary frequency/urgency occur with both STIs and UTIs. Abnormal urinalysis (UA) findings and pyuria are common in both UTIs and STIs, and confirmatory urine culture results are not available to emergency clinicians during the visit to aid in decision-making regarding prescribing antibiotics for UTIs. The objective of this study was to determine the frequency of sterile pyuria in women with confirmed STIs, as well as whether the absolute number of leukocytes on microscopy or nitrite on urine dipstick correlated with positive urine cultures in patients with confirmed STIs. We also sought to determine how many patients with STIs were inappropriately prescribed a UTI antibiotic.

Methods: We performed a retrospective chart review of patients aged 18-50 who had a urinalysis and pelvic examination (including cervical cultures) in the emergency department and tested positive for Neisseria gonorrhoeae, Chlamydia trachomatis, and/or Trichomonas vaginalis. Descriptive statistics were obtained for all variables, and associations between findings were assessed using Fisher’s exact test for categorical variables. We compared proportions using the N-1 chi-squared test.

Results: A total of 1,052 female patients tested positive for Neisseria gonorrhoeae, Chlamydia trachomatis, and/or Trichomonas vaginalis and were entered into the database. The prevalence of pyuria in all cases was 394/1,052, 37% (95% confidence interval [CI] [0.34-0.40]). Of the cases with pyuria, 293/394, 74% (95% CI [0.70-0.78]) had sterile pyuria with negative urine cultures. The prevalence of positive urine cultures in our study population was 101/1,052, 9.6% (95% CI [0.08-0.11]). Culture-positive urines had a mean of 34 leukocytes per high-power field, and culture-negative urines had a mean of 24 leukocytes per high-power field, a difference of 10 (95% CI [3.46-16.15]), which was statistically significant (p=0.003). Only 123 cases tested positive for nitrite on the urinalysis dipstick; 50/123, 41% (95% CI [0.32-0.49]) had positive urine cultures, and 73/123, 59% (95% CI [0.51-0.68]) had negative urine cultures. Nitrite-positive urines were actually 18% more likely to be associated with negative urine cultures in the setting of positive STI cases (95% CI [4.95-30.42], p=0.0048). Antibiotics were prescribed for 295 patients with suspected UTI. Of these, 195/295, 66% (95% CI [0.61-0.71]) had negative urine cultures, and 100/295, 34% (95% CI [0.28-0.39]) had positive urine cultures. Chi-square analysis yielded a difference of these proportions of 32% (95% CI [23.92-39.62], p<0.0001).
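The N-1 chi-squared comparison of two proportions used above can be computed directly from the Pearson chi-square statistic multiplied by (N-1)/N. A small Python sketch follows, run on the antibiotic-treated counts reported in this abstract for illustration.

```python
# "N-1" chi-squared comparison of two proportions: the Pearson chi-square
# statistic for a 2x2 table multiplied by (N-1)/N.
from scipy.stats import chi2

def n_minus_1_chi2(x1: int, n1: int, x2: int, n2: int):
    """Return (difference in proportions, two-sided p-value)."""
    n = n1 + n2
    p_pool = (x1 + x2) / n
    p1, p2 = x1 / n1, x2 / n2
    chi2_pearson = (p1 - p2) ** 2 / (p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    stat = chi2_pearson * (n - 1) / n
    return p1 - p2, chi2.sf(stat, df=1)

# Counts reported above: 195/295 culture-negative vs. 100/295 culture-positive
# among patients prescribed antibiotics for suspected UTI.
diff, p = n_minus_1_chi2(195, 295, 100, 295)
print(f"difference = {diff:.1%}, p = {p:.2g}")
```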

Conclusion: This study demonstrated that in female patients with STIs who have pyuria, there is a high prevalence of sterile pyuria. Our results suggest that reliance on pyuria or positive nitrite for the decision to add antimicrobial therapy empirically for a presumed urinary tract infection in cases in which an STI is confirmed or highly suspected is likely to result in substantial over-treatment.

Disaster Preparedness

Baby Shampoo to Relieve the Discomfort of Tear Gas and Pepper Spray Exposure: A Randomized Controlled Trial

Introduction: Oleoresin capsicum (OC), or pepper spray, and tear gas (CS) are used by police and the military and produce severe discomfort. Some have proposed that washing with baby shampoo helps reduce this discomfort.

Methods: We conducted a prospective, randomized, controlled study to determine if baby shampoo is effective in reducing the severity and duration of these effects. Study subjects included volunteers undergoing OC or CS exposure as part of their police or military training. After standardized exposure to OC or CS, all subjects were allowed to irrigate their eyes and skin ad lib with water. Those randomized to the intervention group were provided with baby shampoo for application to their head, neck, and face. Participants rated their subjective discomfort in two domains on a scale of 0-10 at 0, 3, 5, 10, and 15 minutes. We performed statistical analysis using a two-tailed Mann-Whitney test.

Results: There were 58 participants. Of 40 subjects in the OC arm of the study, there were no significant differences in ocular or respiratory discomfort at any of the time points between control (n=19) and intervention (n=21) groups. Of 18 subjects in the CS arm, there were no significant differences in ocular or skin discomfort at any of the time points between control (n=8) and intervention (n=10) groups.

Conclusion: Irrigation with water and baby shampoo provides no better relief from OC- or CS-induced discomfort than irrigation with water alone.

Health Outcomes

Inpatient Trauma Mortality after Implementation of the Affordable Care Act in Illinois

Introduction: Illinois hospitals have experienced a marked decrease in the number of uninsured patients after implementation of the Affordable Care Act (ACA). However, the full impact of health insurance expansion on trauma mortality is still unknown. The objective of this study was to determine the impact of ACA insurance expansion on trauma patients hospitalized in Illinois.

Methods: We performed a retrospective cohort study of 87,001 trauma inpatients from third quarter 2010 through second quarter 2015, which spans the implementation of the ACA in Illinois. We examined the effects of insurance expansion on trauma mortality using multivariable Poisson regression.

Results: There was no significant difference in mortality comparing the post-ACA period to the pre-ACA period (incidence rate ratio [IRR] 1.05; 95% confidence interval [CI] [0.93-1.17]). However, mortality was significantly higher among the uninsured in the post-ACA period when compared with the pre-ACA uninsured population (IRR 1.46; 95% CI [1.14-1.88]).
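A hedged sketch of how an incidence rate ratio of this kind can be estimated with Poisson regression in Python (statsmodels) is shown below; the dataframe columns and values are synthetic stand-ins, not the Illinois discharge data or the authors' model specification.

```python
# Synthetic illustration of estimating an incidence rate ratio (IRR) with
# Poisson regression; columns and values are invented, not the study data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "died": rng.integers(0, 2, size=1000),          # in-hospital death (0/1)
    "post_aca": rng.integers(0, 2, size=1000),      # admitted after ACA expansion
    "age": rng.integers(18, 90, size=1000),
    "injury_severity": rng.integers(1, 75, size=1000),
})

X = sm.add_constant(df[["post_aca", "age", "injury_severity"]])
fit = sm.GLM(df["died"], X, family=sm.families.Poisson()).fit()

irr = np.exp(fit.params["post_aca"])                 # adjusted IRR for the post-ACA period
ci_low, ci_high = np.exp(fit.conf_int().loc["post_aca"])
print(f"IRR = {irr:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```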

Conclusion: While the ACA has reduced the number of uninsured trauma patients in Illinois, we found no significant decrease in inpatient trauma mortality. However, the group that remains uninsured after ACA implementation appears to be particularly vulnerable. This group should be studied in order to reduce disparate outcomes after trauma.

  • 1 supplemental file

Emergency Department (ED), ED Observation, Day Hospital, and Hospital Admissions for Adults with Sickle Cell Disease

Introduction: Use of alternative venues, such as a day hospital (DH) or emergency department (ED) observation unit, to manage uncomplicated vaso-occlusive crisis (VOC) in patients with sickle cell anemia may significantly reduce admission rates, which may subsequently reduce 30-day readmission rates.

Methods: In the context of a two-institution quality improvement project to implement best practices for management of patients with sickle cell disease (SCD) VOC, we prospectively compared acute care encounters for utilization of 1) emergency department (ED); 2) ED observation unit; 3) DH, and 4) hospital admission, of two different patient cohorts with SCD presenting to our two study sites. Using a representative sample of patients from each institution, we also tabulated SCD patient visits or admissions to outside hospitals within 20 miles of the patients’ home institutions. 

Results: Over 30 months, 427 patients (297 at Site 1 and 130 at Site 2) initiated 4,740 institutional visits, totaling 6,627 different acute care encounters, including combinations of encounters. The number of encounters varied from a low of 0 (203 of 500 patients [40.6%] at Site 1; 65 of 195 patients [33.3%] at Site 2) to a high of 152 (5/month) acute care encounters for one patient at Site 2. Patients at Site 2 were more likely to be admitted to the hospital during the study period (88.4% vs. 74.4%, p=0.0011) and to have an ED visit (96.9% vs. 85.5%, p=0.0002). The DH was used more frequently at Site 1 (1,207 encounters for 297 patients at Site 1 vs. 199 encounters for 130 patients at Site 2), and ED observation was used at Site 1 only. Thirty-five percent of patients visited hospitals outside their home academic center.

Conclusion: In this 30-month assessment of two sickle cell cohorts, healthcare utilization varied dramatically between individual patients. One cohort had more hospital admissions and ED encounters, while the other cohort had more day hospital encounters and used a sickle cell disease observation VOC protocol. One-third of patients sampled visited hospitals for acute care outside of their care providers’ institutions.

Reduced Computed Tomography Use in the Emergency Department Evaluation of Headache Was Not Followed by Increased Death or Missed Diagnosis

Introduction: This study investigated whether a 9.6% decrease in the use of head computed tomography (HCT) for patients presenting to the emergency department (ED) with a chief complaint of headache was followed by an increase in proportions of death or missed intracranial diagnosis during the 22.5-month period following each index ED visit.

Methods: We reviewed the electronic medical records of all patients sampled during a quality improvement effort in which the aforementioned decrease in HCT use had been observed. We reviewed notes from the ED, neurology, neurosurgery, and primary care services, as well as all brain imaging results to determine if death occurred or if an intracranial condition was discovered in the 22.5 months after each index ED visit. An independent, blinded reviewer reviewed each case where an intracranial condition was diagnosed after ED discharge to determine whether the condition was reasonably likely to have been related to the index ED visit’s presentation, thereby representing a missed diagnosis.

Results: Of the 582 separate index ED visits sampled, we observed a total of nine deaths and 10 missed intracranial diagnoses. There was no difference in the proportion of death (p = 0.337) or missed intracranial diagnosis (p = 0.312) observed after a 9.6% reduction in HCT use. Patients who subsequently had visits for headache or who underwent brain imaging were significantly more likely not to have had an HCT during the index ED visit (59.2% vs. 49.6%, p = 0.031, and 37.1% vs. 26%, p = 0.006, respectively).

Conclusion: Our study adds to the compelling evidence that there is opportunity to safely decrease CT imaging for ED patients. To determine the cost effectiveness of such reductions, further research is needed to measure what patients and their healthcare providers do after discharge from the ED when unnecessary testing is withheld.

Education

Educator Toolkits on Second Victim Syndrome, Mindfulness and Meditation, and Positive Psychology: The 2017 Resident Wellness Consensus Summit

Introduction: Burnout, depression, and suicidality among residents of all specialties have become a critical focus of attention for the medical education community. 

Methods: As part of the 2017 Resident Wellness Consensus Summit in Las Vegas, Nevada, resident participants from 31 programs collaborated in the Educator Toolkit workgroup. Over a seven-month period leading up to the summit, this workgroup convened virtually in the Wellness Think Tank, an online resident community, to perform a literature review and draft curricular plans on three core wellness topics. These topics were second victim syndrome, mindfulness and meditation, and positive psychology. At the live summit event, the workgroup expanded to include residents outside the Wellness Think Tank to obtain a broader consensus of the evidence-based toolkits for these three topics.

Results: Three educator toolkits were developed. The second victim syndrome toolkit has four modules, each with a pre-reading material and a leader (educator) guide. In the mindfulness and meditation toolkit, there are three modules with a leader guide in addition to a longitudinal, guided meditation plan. The positive psychology toolkit has two modules, each with a leader guide and a PowerPoint slide set. These toolkits provide educators the necessary resources, reading materials, and lesson plans to implement didactic sessions in their residency curriculum. 

Conclusion: Residents from across the world collaborated and convened to reach a consensus on high-yield—and potentially high-impact—lesson plans that programs can use to promote and improve resident wellness. These lesson plans may stand alone or be incorporated into a larger wellness curriculum.

  • 3 supplemental ZIPs

Executive Summary from the 2017 Emergency Medicine Resident Wellness Consensus Summit

Introduction: Physician wellness has recently become a popular topic of conversation and publication within the house of medicine and specifically within emergency medicine (EM). Through a joint collaboration involving Academic Life in Emergency Medicine’s (ALiEM) Wellness Think Tank, Essentials of Emergency Medicine (EEM), and the Emergency Medicine Residents’ Association (EMRA), a one-day Resident Wellness Consensus Summit (RWCS) was organized.

Methods: The RWCS was held on May 15, 2017, as a pre-day event prior to the 2017 EEM conference in Las Vegas, Nevada. Seven months before the RWCS event, pre-work began in the ALiEM Wellness Think Tank, which was launched in October 2016. The Wellness Think Tank is a virtual community of practice involving EM residents from the U.S. and Canada, hosted on the Slack digital-messaging platform. A working group was formed for each of the four predetermined themes: wellness curriculum development; educator toolkit resources for specific wellness topics; programmatic innovations; and wellness-targeted technologies. 

Results: Pre-work for RWCS included 142 residents from 100 different training programs in the Wellness Think Tank. Participants in the actual RWCS event included 44 EM residents, five EM attendings who participated as facilitators, and three EM attendings who acted as participants. The four working groups ultimately reached a consensus on their specific objectives to improve resident wellness on both the individual and program level. 

Conclusion: The Resident Wellness Consensus Summit was a unique and novel consensus meeting, involving residents as the primary stakeholders. The summit demonstrated that it is possible to galvanize a large group of stakeholders in a relatively short time by creating robust trust, communication, and online learning networks to create resources that support resident wellness.

An Evidence-based, Longitudinal Curriculum for Resident Physician Wellness: The 2017 Resident Wellness Consensus Summit

Introduction: Physicians are at much higher risk for burnout, depression, and suicide than their non-medical peers. One of the working groups from the May 2017 Resident Wellness Consensus Summit (RWCS) addressed this issue through the development of a longitudinal residency curriculum to address resident wellness and burnout.

Methods: A 30-person (27 residents, three attending physicians) Wellness Curriculum Development workgroup developed the curriculum in two phases. In the first phase, the workgroup worked asynchronously in the Wellness Think Tank – an online resident community – conducting a literature review to identify 10 core topics. In the second phase, the workgroup expanded to include residents outside the Wellness Think Tank at the live RWCS event to identify gaps in the curriculum. This resulted in an additional seven core topics. 

Results: Seventeen foundational topics served as the framework for the longitudinal resident wellness curriculum. The curriculum includes a two-module introduction to wellness; a seven-module “Self-Care Series” focusing on the appropriate structure of wellness activities and everyday necessities that promote physician wellness; a two-module section on physician suicide and self-help; a four-module “Clinical Care Series” focusing on delivering bad news, navigating difficult patient encounters, dealing with difficult consultants and staff members, and debriefing traumatic events in the emergency department; wellness in the workplace; and dealing with medical errors and shame. 

Conclusion: The resident wellness curriculum, derived from an evidence-based approach and input of residents from the Wellness Think Tank and the RWCS event, provides a guiding framework for residency programs in emergency medicine and potentially other specialties to improve physician wellness and promote a culture of wellness.

  • 1 supplemental PDF

Identifying Gaps and Launching Resident Wellness Initiatives: The 2017 Resident Wellness Consensus Summit

Introduction: Burnout, depression, and suicidality among residents of all specialties have become a critical focus for the medical education community, especially among learners in graduate medical education. In 2017 the Accreditation Council for Graduate Medical Education (ACGME) updated the Common Program Requirements to focus more on resident wellbeing. To address this issue, one working group from the 2017 Resident Wellness Consensus Summit (RWCS) focused on wellness program innovations and initiatives in emergency medicine (EM) residency programs. 

Methods: Over a seven-month period leading up to the RWCS event, the Programmatic Initiatives workgroup convened virtually in the Wellness Think Tank, an online, resident community consisting of 142 residents from 100 EM residencies in North America. A 15-person subgroup (13 residents, two faculty facilitators) met at the RWCS to develop a public, central repository of initiatives for programs, as well as tools to assist programs in identifying gaps in their overarching wellness programs. 

Results: An online submission form and central database of wellness initiatives were created and accessible to the public. Wellness Think Tank members collected an initial 36 submissions for the database by the time of the RWCS event. Based on general workplace, needs-assessment tools on employee wellbeing and Kern’s model for curriculum development, a resident-based needs-assessment survey and an implementation worksheet were created to assist residency programs in wellness program development. 

Conclusion: The Programmatic Initiatives workgroup from the resident-driven RWCS event created tools to assist EM residency programs in identifying existing initiatives and gaps in their wellness programs to meet the ACGME’s expanded focus on resident wellbeing.

  • 2 supplemental PDFs

Patient Safety

Thromboprophylaxis for Patients with High-risk Atrial Fibrillation and Flutter Discharged from the Emergency Department

Introduction: Many patients with atrial fibrillation or atrial flutter (AF/FL) who are high risk for ischemic stroke are not receiving evidence-based thromboprophylaxis. We examined anticoagulant prescribing within 30 days of receiving dysrhythmia care for non-valvular AF/FL in the emergency department (ED). 

Methods: This prospective study included non-anticoagulated adults at high risk for ischemic stroke (ATRIA score ≥7) who received emergency AF/FL care and were discharged home from seven community EDs between May 2011 and August 2012. We characterized oral anticoagulant prescribing patterns and identified predictors of receiving anticoagulants within 30 days of the index ED visit. We also describe documented reasons for withholding anticoagulation.

Results: Of 312 eligible patients, 128 (41.0%) were prescribed anticoagulation at ED discharge or within 30 days. Independent predictors of anticoagulation included age (adjusted odds ratio [aOR] 0.89 per year, 95% confidence interval [CI] 0.82-0.96); ED cardiology consultation (aOR 1.89, 95% CI [1.10-3.23]); and failure of sinus restoration by time of ED discharge (aOR 2.65, 95% CI [1.35-5.21]). Reasons for withholding anticoagulation at ED discharge were documented in 139 of 227 cases (61.2%), the most common of which were deferring the shared decision-making process to the patient’s outpatient provider, perceived bleeding risk, patient refusal, and restoration of sinus rhythm. 

Conclusion: Approximately 40% of non-anticoagulated AF/FL patients at high risk for stroke who presented for emergency dysrhythmia care were prescribed anticoagulation within 30 days. Physicians were less likely to anticoagulate older patients and those with ED sinus restoration. Opportunities exist to improve rates of thromboprophylaxis in this high-risk population.

  • 1 supplemental file

Radial Arterial Lines Have a Higher Failure Rate than Femoral

Introduction: Arterial lines are important for monitoring critically ill patients. They are placed most commonly in either femoral or radial sites, though there is little evidence to guide site preference.

Methods: This is an ambispective, observational, cohort study to determine variance in failure rates between femoral and radial arterial lines. This study took place from 2012 to 2016 and included all arterial lines placed in adult patients at a single institution. Causes of line failure were defined as inaccuracy, blockage, site issue, or accidental removal. The primary outcome was line failure by location. Secondary outcomes included time to failure and cause of failure.

Results: We evaluated 272 arterial lines over both arms of the study. Fifty-eight lines eventually failed (21.32%). Femoral lines failed less often in both retrospective (5.36% vs 30.71%) and prospective (5.41% vs. 25.64%) arms. The absolute risk reduction of line failure in the femoral site was 20.2% (95% confidence interval [3.7 - 36.2%]). Failures occurred sooner in radial sites compared to femoral. Infection was not a significant cause of removal in our femoral cohort.

Conclusion: Femoral arterial lines fail much less often than radial arterial lines. If lines were placed preferentially in the femoral artery, one line failure would be prevented for every fourth line placed.

Evaluation of a Novel Handoff Communication Strategy for Patients Admitted from the Emergency Department

Introduction: Miscommunication during inter-unit handoffs between emergency and internal medicine physicians may jeopardize patient safety.  Our goal was to evaluate the impact of a structured communication strategy on the quality of admission handoffs.

Methods: We conducted a mixed-methods, pre-test/post-test study at a 560-bed academic health center with 60,000 emergency department (ED) patient visits per year. Admission-handoff best practices were integrated into a modified SBAR format, resulting in the Situation, Background, Assessment, Responsibilities & Risk, Discussion & Disposition, Read-back & Record (SBAR-DR) model. Physician handoff conversations were recorded and transcribed for the 60 days before (n=110) and 60 days after (n=110) introduction of the SBAR-DR strategy. Transcriptions were scored by two blinded physicians using a 16-item scoring instrument. The primary outcome was the composite handoff quality score. We assessed physician perceptions via a post-intervention survey.

Results: The composite quality score improved in the post-intervention phase (7.57 ± 2.42 vs. 8.45 ± 2.51, p=.0085). Three of the 16 individual scoring elements also improved, including time for questions (70.6% vs. 82.7%, p=.0344) and confirmation of disposition plan (41.8% vs. 62.7%, p=.0019). The majority of emergency and internal medicine physicians felt that the SBAR-DR model had a positive impact on patient safety and handoff efficiency.

Conclusion: Implementation of the SBAR-DR strategy resulted in improved verbal handoff quality. Agreement upon a clear disposition plan was the most improved element, which is of great importance in delineating responsibility of care and streamlining ED throughput. Future efforts should focus on nurturing broader physician buy-in to facilitate institution-wide implementation.

  • 1 supplemental file

Behavioral Health

Cannabinoid Hyperemesis Syndrome: Public Health Implications and a Novel Model Treatment Guideline

 

Introduction: Cannabinoid hyperemesis syndrome (CHS) is an entity associated with cannabinoid overuse. CHS typically presents with cyclical vomiting, diffuse abdominal pain, and relief with hot showers. Patients often present to the emergency department (ED) repeatedly and undergo extensive evaluations including laboratory examination, advanced imaging, and in some cases unnecessary procedures. They are exposed to an array of pharmacologic interventions, including opioids, that not only lack evidence but may also be harmful. This paper presents a novel treatment guideline that highlights the identification and diagnosis of CHS and summarizes treatment strategies aimed at resolution of symptoms, avoidance of unnecessary opioids, and ensuring patient safety.

Methods: The San Diego Emergency Medicine Oversight Commission in collaboration with the County of San Diego Health and Human Services Agency and San Diego Kaiser Permanente Division of Medical Toxicology created an expert consensus panel to establish a guideline to unite the ED community in the treatment of CHS. 

Results: Per the consensus guideline, treatment should focus on symptom relief and education on the need for cannabis cessation. Capsaicin is a readily available topical preparation that is reasonable to use as first-line treatment. Antipsychotics including haloperidol and olanzapine have been reported to provide complete symptom relief in limited case studies. Conventional antiemetics including antihistamines, serotonin antagonists, dopamine antagonists and benzodiazepines may have limited effectiveness. Emergency physicians should avoid opioids if the diagnosis of CHS is certain and educate patients that cannabis cessation is the only intervention that will provide complete symptom relief.

Conclusion: An expert consensus treatment guideline is provided to assist with diagnosis and appropriate treatment of CHS. Clinicians and public health officials should identify and treat CHS patients with strategies that decrease exposure to opioids, minimize use of healthcare resources, and maximize patient safety.

 

Optimal Implementation of Prescription Drug Monitoring Programs in the Emergency Department

The opioid epidemic is the most significant modern-day public health crisis. Physicians and lawmakers have developed methods and practices to curb opioid use. This article describes one method, prescription drug monitoring programs (PDMP), through the lens of how to optimize their use in emergency departments (ED). EDs have rapidly become a central location to combat opioid abuse and drug diversion. PDMPs can provide emergency physicians with comprehensive prescribing information to improve clinical decisions around opioids. However, PDMPs vary tremendously in their accessibility and usability in the ED, which limits their effectiveness at the point of care. Problems are complicated by varying state-to-state requirements for data availability and accessibility. Potential solutions to improving the utility of PDMPs in EDs include integrating PDMPs with electronic health records; implementing unsolicited reporting and prescription context; improving PDMP accessibility and data analytics; and expanding the scope of PDMPs. These improvements may help improve clinical decision-making for emergency physicians through better data, data presentation, and accessibility.

By Default: The Effect of Prepopulated Prescription Quantities on Opioid Prescribing in the Emergency Department

Introduction: Opioid prescribing patterns have come under increasing scrutiny with the recent rise in opioid prescriptions, opioid misuse and abuse, and opioid-related adverse events. To date, there have been limited studies on the effect of default tablet quantities as part of emergency department (ED) electronic order entry. Our goal was to evaluate opioid prescribing patterns before and after the removal of a default quantity of 20 tablets from ED electronic order entry.

Methods: We performed a retrospective observational study at a single academic, urban ED with 58,000 annual visits. We identified all adult patients (18 years or older) seen in the ED and discharged home with prescriptions for tablet forms of hydrocodone and oxycodone (including mixed formulations with acetaminophen). We compared the quantity of tablets prescribed per opioid prescription 12 months before and 10 months after the electronic order-entry prescription default quantity of 20 tablets was removed and replaced with no default quantity. No specific messaging was given to providers, to avoid influencing prescribing patterns. We used two-sample Wilcoxon rank-sum test, two-sample test of proportions, and Pearson’s chi-squared tests where appropriate for statistical analysis.

Results: A total of 4,104 adult patients received discharge prescriptions for opioids in the pre-intervention period (151.6 prescriptions per 1,000 discharged adult patients), and 2,464 in the post-intervention period (106.69 prescriptions per 1,000 discharged adult patients). The median quantity of opioid tablets prescribed decreased from 20 (interquartile range [IQR] 10-20) to 15 (IQR 10-20) (p<0.0001) after removal of the default quantity. While the most frequent quantity of tablets received in both groups was 20 tablets, the proportion of patients who received discharge prescriptions containing 20 tablets decreased from 0.5 (95% confidence interval [CI] [0.48-0.52]) to 0.23 (95% CI [0.21-0.24]) (p<0.001) after default quantity removal.
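The two tests named in the methods can be illustrated with a short Python sketch; the tablet-quantity arrays below are synthetic stand-ins for the pre- and post-intervention prescriptions, not the study data.

```python
# Synthetic stand-in data: tablet quantities per prescription before and after
# removal of the default, compared with a Wilcoxon rank-sum test and a
# two-sample test of the proportion prescribed exactly 20 tablets.
import numpy as np
from scipy.stats import ranksums
from statsmodels.stats.proportion import proportions_ztest

rng = np.random.default_rng(2)
pre_tabs = rng.choice([10, 12, 15, 20, 30], size=4104, p=[0.15, 0.05, 0.20, 0.50, 0.10])
post_tabs = rng.choice([10, 12, 15, 20, 30], size=2464, p=[0.25, 0.10, 0.30, 0.23, 0.12])

_, p_rank = ranksums(pre_tabs, post_tabs)                      # tablet-quantity distributions
counts = np.array([(pre_tabs == 20).sum(), (post_tabs == 20).sum()])
totals = np.array([pre_tabs.size, post_tabs.size])
_, p_prop = proportions_ztest(counts, totals)                  # proportion receiving 20 tablets

print(f"rank-sum p = {p_rank:.3g}; proportion-of-20-tablet prescriptions p = {p_prop:.3g}")
```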

Conclusion: Although the median number of tablets differed significantly before and after the intervention, the clinical significance of this difference is unclear. The wider distribution of the quantity of tablets prescribed after removal of the default may reflect more appropriate prescribing patterns (i.e., less severe indications receiving fewer tablets and more severe indications receiving more). A default value of 20 tablets for opioid prescriptions may be an example in which the electronic medical record’s ability to reduce practice variability in medication orders actually counteracts optimal patient care.

 

Emergency Department Frequent Users for Acute Alcohol Intoxication

Introduction: A subset of frequent users of emergency services are those who use the emergency department (ED) for acute alcohol intoxication. This population and their ED encounters have not been previously described.

Methods: This was a retrospective, observational, cohort study of patients presenting to the ED for acute alcohol intoxication between 2012 and 2016. We collected all data from the electronic medical record. Frequent users for alcohol intoxication were defined as those with greater than 20 visits for acute intoxication without additional medical chief complaints in the previous 12 months. We used descriptive statistics to evaluate characteristics of frequent users for alcohol intoxication, as well as their ED encounters.

Results: We identified 32,121 patient encounters. Of those, 325 patients were defined as frequent users for alcohol intoxication, comprising 11,370 of the encounters during the study period. The median maximum number of encounters per person for alcohol intoxication in a one-year period was 47 encounters (range 20 to 169). Frequent users were older (47 years vs. 39 years), and more commonly male (86% vs. 71%). Frequent users for alcohol intoxication had higher rates of medical and psychiatric comorbidities including liver disease, chronic kidney disease, ischemic vascular disease, dementia, chronic obstructive pulmonary disease, history of traumatic brain injury, schizophrenia, and bipolar disorder. 

Conclusion: In this study, we identified a group of ED frequent users who use the ED for acute alcohol intoxication. This population had higher rates of medical and psychiatric comorbidities compared to non-frequent users.

 

Critical Care

Nasal Cannula Apneic Oxygenation Prevents Desaturation During Endotracheal Intubation: An Integrative Literature Review

Patients requiring emergency airway management may be at greater risk of acute hypoxemic events because of underlying lung pathology, high metabolic demands, insufficient respiratory drive, obesity, or the inability to protect their airway against aspiration. Emergency tracheal intubation is often required before the complete information needed to assess the risk of procedural hypoxia is acquired (i.e., arterial blood gas level, hemoglobin value, or chest radiograph). During pre-oxygenation, administering high-flow nasal oxygen in addition to a non-rebreather face mask can significantly boost the effective inspired oxygen concentration. Similarly, during the apnea created by rapid sequence intubation (RSI), the same high-flow nasal cannula can help maintain or increase oxygen saturation during efforts to secure the tube (oral intubation). Thus, nasal oxygen applied during pre-oxygenation and continued during apnea can prevent hypoxia before and during intubation, extend the safe apnea time, and improve first-pass success. We conducted a literature review of nasal-cannula apneic oxygenation during intubation, focusing on two components: oxygen saturation during intubation and oxygen desaturation time. We performed an electronic literature search from 1980 to November 2017, using PubMed, Elsevier, ScienceDirect, and EBSCO. We identified 14 studies that pointed toward the benefits of using nasal cannula during emergency intubation.

Comparison of Static versus Dynamic Ultrasound for the Detection of Endotracheal Intubation

Introduction: In the emergency department setting, it is essential to rapidly and accurately confirm correct endotracheal tube (ETT) placement. Ultrasound is an increasingly studied modality for identifying ETT location. However, there has been significant variation in techniques between studies, with some using the dynamic technique, while others use a static approach. This study compared the static and dynamic techniques to determine which was more accurate for ETT identification.  

Methods: We performed this study in a cadaver lab using three different cadavers to represent variations in neck circumference. Cadavers were randomized to either tracheal or esophageal intubation in equal proportions. Blinded sonographers then assessed the location of the ETT using either static or dynamic sonography. We assessed accuracy of sonographer identification of ETT location, time to identification, and operator confidence.

Results: A total of 120 intubations were performed: 62 tracheal intubations and 58 esophageal intubations. The static technique was 93.6% (95% confidence interval [CI] [84.3% to 98.2%]) sensitive and 98.3% specific (95% CI [90.8% to 99.9%]). The dynamic technique was 92.1% (95% CI [82.4% to 97.4%]) sensitive and 91.2% specific (95% CI [80.7% to 97.1%]). The mean time to identification was 6.72 seconds (95% CI [5.53 to 7.9] seconds) in the static technique and 6.4 seconds (95% CI [5.65 to 7.16] seconds) in the dynamic technique. Operator confidence was 4.9/5.0 (95% CI [4.83 to 4.97]) in the static technique and 4.86/5.0 (95% CI [4.78 to 4.94]) in the dynamic technique. There was no statistically significant difference between groups for any of the outcomes.
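A brief Python sketch of computing sensitivity and specificity with exact (Clopper-Pearson) confidence intervals is shown below; the cell counts are back-calculated from the static-technique percentages above and are intended only as illustrative inputs.

```python
# Exact (Clopper-Pearson) confidence intervals for sensitivity and specificity.
# Counts are back-calculated from the static-technique results for illustration:
# ~58/62 tracheal placements and ~57/58 esophageal placements identified correctly.
from statsmodels.stats.proportion import proportion_confint

def characteristic(successes: int, trials: int):
    point = successes / trials
    low, high = proportion_confint(successes, trials, alpha=0.05, method="beta")
    return point, low, high

for name, (hits, n) in {"sensitivity": (58, 62), "specificity": (57, 58)}.items():
    pt, lo, hi = characteristic(hits, n)
    print(f"{name}: {pt:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```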

Conclusion: This study demonstrated that both the static and dynamic sonography approaches were rapid and accurate for confirming ETT location with no statistically significant difference between modalities. Further studies are recommended to compare these techniques in ED patients and with more novice sonographers.

 

  • 4 supplemental videos

Intravenous Continuous Infusion vs. Oral Immediate-release Diltiazem for Acute Heart Rate Control

Introduction: Atrial fibrillation (AF) is a common diagnosis of patients presenting to the emergency department (ED). Intravenous (IV) diltiazem bolus is often the initial drug of choice for acute management of AF with rapid ventricular response (RVR). The route of diltiazem after the initial IV loading dose may influence the disposition of the patient from the ED. However, no studies exist comparing oral (PO) immediate release and IV continuous infusion diltiazem in the emergency setting. The objective of this study was to compare the incidence of treatment failure, defined as a heart rate (HR) of >110 beats/min at four hours or conversion to another agent, between PO immediate release and IV continuous infusion diltiazem after an initial IV diltiazem loading dose in patients in AF with RVR.

Methods: This was a single-center, observational, retrospective study conducted at a tertiary academic medical center. The study population included patients ≥18 years old who presented to the ED in AF with a HR > 110 beats/min and received an initial IV diltiazem loading dose. We used multivariate logistic regression to assess the association between routes of administration and treatment failure.

Results: A total of 111 patients were included in this study. Twenty-seven percent (11/41) of the patients in the PO immediate-release group had treatment failure compared to 46% (32/70) in the IV continuous-infusion group. The unadjusted odds ratio (OR) of treatment failure for PO compared with IV was 0.4 (95% confidence interval [CI] [0.18, 0.99], p = 0.046). When we performed a multivariate analysis adjusted for race and initial HR, PO remained less likely to be associated with treatment failure than IV (OR 0.4, 95% CI [0.15, 0.94], p = 0.041). The median dose of PO diltiazem and IV continuous-infusion diltiazem at four hours was 30 mg and 10 mg/h, respectively.
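For illustration, a covariate-adjusted odds ratio of this kind could be estimated with a logistic regression such as the Python sketch below; the dataframe, variable names, and category codes are synthetic, not the authors' dataset or model.

```python
# Synthetic illustration of a covariate-adjusted logistic regression; the
# dataframe, variable names, and category codes are invented for this sketch.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "treatment_failure": rng.integers(0, 2, size=111),    # HR > 110 at 4 h or agent switch
    "oral_diltiazem": rng.integers(0, 2, size=111),        # 1 = PO immediate release, 0 = IV infusion
    "initial_hr": rng.integers(111, 180, size=111),
    "race": rng.choice(["A", "B", "C"], size=111),
})

fit = smf.logit("treatment_failure ~ oral_diltiazem + initial_hr + C(race)", data=df).fit(disp=0)
odds_ratio = np.exp(fit.params["oral_diltiazem"])          # adjusted OR for the PO route
ci = np.exp(fit.conf_int().loc["oral_diltiazem"])
print(f"adjusted OR = {odds_ratio:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")
```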

Conclusion: After a loading dose of IV diltiazem, PO immediate-release diltiazem was associated with a lower rate of treatment failure at four hours than IV continuous infusion in patients with AF with RVR.

Pay It Forward: High School Video-based Instruction Can Disseminate CPR Knowledge in Priority Neighborhoods

Introduction: The implementation of creative new strategies to increase layperson cardiopulmonary resuscitation (CPR) and defibrillation may improve resuscitation in priority populations. As more communities implement laws requiring CPR training in high schools, there is potential for a multiplier effect and reach into priority communities with low bystander-CPR rates. 

Methods: We investigated the feasibility, knowledge acquisition, and dissemination of a high school-centered, CPR video self-instruction program with a “pay-it-forward” component in a low-income, urban, predominantly Black neighborhood in Chicago, Illinois with historically low bystander-CPR rates. Ninth and tenth graders followed a video self-instruction kit in a classroom setting to learn CPR. As homework, students were required to use the training kit to “pay it forward” and teach CPR to their friends and family. We administered pre- and post-intervention knowledge surveys to measure knowledge acquisition among classroom and “pay-it-forward” participants. 

Results: Seventy-one classroom participants trained 347 of their friends and family, for an average of 4.9 additional persons trained per kit. Classroom CPR knowledge survey scores increased from 58% to 93% (p < 0.0001). The pay-it-forward cohort saw an increase from 58% to 82% (p < 0.0001).

Conclusion: A high school-centered, CPR educational intervention with a “pay-it-forward” component can disseminate CPR knowledge beyond the classroom. Because schools are centrally-organized settings to which all children and their families have access, school-based interventions allow for a broad reach that encompasses all segments of the population and have potential to decrease disparities in bystander CPR provision.

  • 1 supplemental PDF

Higher Mallampati Scores Are Not Associated with More Adverse Events During Pediatric Procedural Sedation and Analgesia

Introduction: Procedural sedation and analgesia (PSA) is used by non-anesthesiologists (NAs) outside of the operating room for several types of procedures. Adverse events during pediatric PSA that pose the most risk to patient safety involve airway compromise. Higher Mallampati scores may indirectly indicate children at risk for airway compromise. Medical governing bodies have proposed guidelines for PSA performed by NAs, but these recommendations rarely suggest using Mallampati scores in pre-PSA evaluations. Our objective was to compare rates of adverse events during pediatric PSA in children with Mallampati scores of III/IV vs. scores of Mallampati I/II. 

Methods: This was a prospective, observational study. Children 18 years of age and under who presented to the pediatric emergency department (PED) and required PSA were enrolled. We obtained Mallampati scores as part of pre-PSA assessments. We defined adverse events as oxygen desaturation < 90%, apnea, laryngospasm, bag-valve-mask ventilation performed, repositioning of patient, emesis, and “other.” We used chi-square analysis to compare rates of adverse events between groups. 

Results: We enrolled 575 patients. The median age of the patients was 6.0 years (interquartile range 3.1-9.9). The primary reason for PSA was fracture reduction (n=265, 46.1%). Most sedations involved the use of ketamine (n=568, 98.8%). Patients with Mallampati scores of III/IV were more likely to need repositioning compared to those with Mallampati scores of I/II (p=0.049). Overall, patients with Mallampati III/IV scores did not experience a higher proportion of adverse events compared to those with Mallampati scores of I/II. The relative risk of any adverse event in patients with Mallampati scores of III/IV (40 [23.8%]) compared to patients with Mallampati scores of I/II (53 [18.3%]) was 1.3 (95% confidence interval [0.91-1.87]).

Conclusion: Patients with Mallampati scores of III/IV vs. Mallampati scores of I/II are not at an increased risk of adverse events during pediatric PSA. However, patients with Mallampati III/IV scores showed an increased need for repositioning, suggesting that the sedating physician should be vigilant when performing PSA in children with higher Mallampati scores.

Addition of Audiovisual Feedback During Standard Compressions Is Associated with Improved Ability

Introduction: A benefit of in-hospital cardiac arrest is the opportunity for rapid initiation of “high-quality” chest compressions, defined by current American Heart Association (AHA) adult guidelines as a depth of 2-2.4 inches, full chest recoil, a rate of 100-120 per minute, and minimal interruptions with a chest compression fraction (CCF) ≥ 60%. The goal of this study was to assess the effect of audiovisual feedback on the ability to maintain high-quality chest compressions as per the 2015 updated guidelines.

Methods: Ninety-eight participants were randomized into four groups. Participants were randomly assigned to perform chest compressions with or without use of audiovisual feedback (+/- AVF). Participants were further assigned to perform either standard compressions with a ventilation ratio of 30:2 to simulate cardiopulmonary resuscitation (CPR) without an advanced airway or continuous chest compressions to simulate CPR with an advanced airway. The primary outcome measured was ability to maintain high-quality chest compressions as defined by current 2015 AHA guidelines.

Results: Overall comparisons between continuous and standard chest compressions (n=98) showed no significant differences in chest compression dynamics (p > 0.05 for all). Overall comparisons between groups with and without AVF (n=98) were significant for differences in average rate of compressions per minute (p=0.0241) and the proportion of chest compressions within guideline rate recommendations (p=0.0084). There was a significant difference in the proportion of high-quality chest compressions favoring AVF (p=0.0399). Comparisons between chest compression strategy groups with and without AVF were significant for differences in compression dynamics favoring AVF (p < 0.05).

Conclusion: Overall, AVF is associated with greater ability to maintain high-quality chest compressions per most-recent AHA guidelines. Specifically, AVF was associated with a greater proportion of compressions within ideal rate with standard chest compressions while demonstrating a greater proportion of compressions with simultaneous ideal rate and depth with a continuous compression strategy.

Erratum

This Article Corrects: “GLASS Clinical Decision Rule Applied to Thoracolumbar Spinal Fractures in Patients Involved in Motor Vehicle Crashes”

Introduction: There are established and validated clinical decision tools for cervical spine clearance. Almost all of the rules include spinal tenderness on exam as an indication for imaging. Our goal was to apply GLass intact Assures Safe Spine (GLASS), a previously derived clinical decision tool for cervical spine clearance, to thoracolumbar injuries. GLASS is a simple, objective method to evaluate patients involved in motor vehicle collisions and determine which are at low risk for thoracolumbar injuries.

Methods: We performed a retrospective cohort study using the National Accident Sampling System-Crashworthiness Data System (NASS-CDS) over an 11-year period (1998-2008). Sampled occupant cases selected for this study included patients aged 16-60 who were belt-restrained, front-seat occupants involved in a crash with no airbag deployment and no glass damage prior to the crash.

Results: We evaluated 14,191 occupants involved in motor vehicle collisions in this analysis. GLASS had a sensitivity of 94.4% (95% CI [86.3-98.4%]), specificity of 54.1% (95% CI [53.2-54.9%]), and negative predictive value of 99.9% (95% CI [99.8-99.9%]) for thoracic injuries, and a sensitivity of 90.3% (95% CI [82.8-95.2%]), specificity of 54.2% (95% CI [53.3-54.9%]), and negative predictive value of 99.9% (95% CI [99.7-99.9%]) for lumbar injuries.
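A minimal sketch of deriving sensitivity, specificity, and negative predictive value from a 2x2 screening table follows; the counts are reconstructed to roughly match the thoracic-injury figures above and are illustrative only.

```python
# Sensitivity, specificity, and negative predictive value from a 2x2 table.
# Counts reconstructed to approximate the thoracic-injury results above
# (illustrative only, not the actual NASS-CDS tabulation).
def screening_stats(tp: int, fp: int, fn: int, tn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "negative predictive value": tn / (tn + fn),
    }

stats = screening_stats(tp=68, fp=6480, fn=4, tn=7639)
print({name: f"{value:.1%}" for name, value in stats.items()})
```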

Conclusion: The GLASS rule offers the possibility of a novel, more objective thoracolumbar spine clearance tool. Prospective evaluation would be required to further evaluate the validity of this clinical decision rule. [West J Emerg Med. 2017;18(6):1108-1113.]