Effect of an Educational Intervention on Medical Student Scripting and Patient Satisfaction: A Randomized Trial

Introduction
Effective communication between clinicians and patients has been shown to improve patient outcomes and reduce malpractice liability, and is now being tied to reimbursement. A communication strategy known as "scripting" has been suggested to improve patient satisfaction in multiple hospital settings, but the frequency with which medical students use this strategy, and whether it affects patient perception of medical student care, is unknown. Our objective was to measure the use of targeted communication skills after an educational intervention and to further clarify the relationship between communication element use and patient satisfaction.

Methods
Medical students were block randomized into a control or intervention group. Those in the intervention group received refresher training in scripted communication; those in the control group received no instruction or other communication-related intervention. Use of six explicit communication behaviors was recorded by trained study observers: 1) acknowledging the patient by name, 2) introducing themselves as medical students, 3) explaining their role in the patient's care, 4) explaining the care plan, 5) providing an estimated duration of time to be spent in the emergency department (ED), and 6) notifying the patient that another provider would also be seeing them. Patients then completed a survey regarding their satisfaction with the medical student encounter.

Results
We observed 474 medical student-patient encounters in the ED (231 in the control group and 243 in the intervention group). We were unable to detect a statistically significant difference in communication element use between the intervention and control groups. One communication element, explaining steps in the care plan, was positively associated with patient perception of the medical student's overall communication skills. Otherwise, there was no statistically significant association between element use and patient satisfaction.

Conclusion
We were unable to demonstrate any improvement in student use of communication elements or in patient satisfaction after refresher training in scripted communication. Furthermore, there was little variation in patient satisfaction based on the use of scripted communication elements. Effective communication with patients in the ED is complicated and requires further investigation into how to teach this skill set.


Design and Setting
This was a randomized controlled trial conducted between July 2014 and April 2015 in the EDs of two urban teaching hospitals affiliated with the Indiana University School of Medicine. The Sidney and Lois Eskenazi Hospital (Hospital A) is a county hospital with approximately 100,000 patient visits annually. Indiana University Health Methodist Hospital (Hospital B) is a tertiary referral center, also with approximately 100,000 patient visits annually. The study was approved by the Indiana University Institutional Review Board.

Participants
Fourth-year medical students were enrolled on a volunteer basis and provided written consent at the orientation to their emergency medicine (EM) clerkship, a required 4-week clinical course at Indiana University School of Medicine. There was no incentive for participation. Study information was given and consent was obtained by an EM resident who was not responsible for their grade. Students participating in the study were informed that they would be observed while on shift in the ED but were otherwise kept blind as to what was being observed.
Patients who could provide verbal consent (>18 years old or had a parent present to consent) in English or Spanish and who were evaluated by a participating medical student were given the option to participate in a patient satisfaction survey. Surveys were not administered to patients with the following conditions: incarcerated, altered mental status, psychiatric

Outcome Measures
Six communication elements were previously chosen for observation as outlined in our pilot study.13 The elements are shown in Table 2. They are based on AIDET®, a patient communication framework by The Studer Group. We assessed patient satisfaction through the same four-part survey used in that study (Appendix A). The primary outcome of interest was change in the frequency of "yes" responses to questions about likelihood to return to the ED or likelihood to refer a loved one to the ED. Secondary outcomes of interest included frequency of use of each of the six elements, improvement in the patient's perception of the student's overall communication skill, and improvement in score on the Communication Assessment Tool (CAT). The CAT is a previously validated instrument that assesses interpersonal and communication skills using a 15-item survey with a five-point Likert scale (1 = poor, 2 = fair, 3 = good, 4 = very good, 5 = excellent).14 We modified the survey by removing one question, "The doctor's staff treated me with respect," to keep the focus on the student-patient interaction rather than the patient's overall experience.

Observers and Study Procedure
Four observers, all students in the pre-medical program at Indiana University-Purdue University Indianapolis, were trained by study investigators to navigate participating EDs and record elements of patient-student interactions on a data collection form. Data collection forms included whether or not the student used each of the six communication elements as well as whether the student performed 17 additional "dummy" data points, which were chosen by study investigators as actions commonly performed by students. These were added to keep the student and observers blind to what elements were of interest for the study. Refer to Appendix B for the complete data collection sheet with all "dummy" data points.
As part of their training, the four observers viewed 31 simulated video recordings of interactions between a patient and a provider and marked whether the provider used each of the six communication elements of interest as well as whether they performed each of the 17 "dummy" data points. Responses for each observer were compared to "criterion standard" responses from a fifth observer, the Master of Public Health student who had performed all observations in our previous study.13 We calculated agreement of the observers with the criterion standard as kappa and percent agreement.
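The two agreement statistics mentioned above can be computed directly for binary (used/not used) ratings. The sketch below, with illustrative data rather than the study's actual observations, shows percent agreement and Cohen's kappa against a criterion standard:

```python
def percent_agreement(rater, standard):
    """Fraction of items on which the rater matches the criterion standard."""
    matches = sum(a == b for a, b in zip(rater, standard))
    return matches / len(standard)

def cohens_kappa(rater, standard):
    """Cohen's kappa for binary (yes/no) ratings: observed agreement
    corrected for the agreement expected by chance alone."""
    n = len(standard)
    po = percent_agreement(rater, standard)           # observed agreement
    p_yes_r = sum(rater) / n                          # rater's "yes" rate
    p_yes_s = sum(standard) / n                       # standard's "yes" rate
    # Chance agreement: both say "yes" or both say "no" independently.
    pe = p_yes_r * p_yes_s + (1 - p_yes_r) * (1 - p_yes_s)
    return (po - pe) / (1 - pe)

# Illustrative ratings (1 = element used, 0 = not used) across eight
# simulated encounters; not the study's data.
observer = [1, 0, 1, 1, 0, 1, 0, 1]
standard = [1, 0, 0, 1, 0, 1, 0, 1]
```

Kappa discounts agreement attributable to chance, which is why it is reported alongside raw percent agreement.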
Each month, the four observers were scheduled for a variety of shifts across multiple days and times. For each shift, the observer was assigned to follow 1-3 participating medical students. Observers followed their assigned students and completed the data collection sheet for each patient encounter. After the student-patient encounter but before discharge or admission, the observer returned to the patient's room and verbally administered the patient satisfaction survey. At this time, the observer presented the patient with a picture of the student and stressed that the questions applied specifically to the patient's interaction with that student and not to other aspects of the patient's care in the ED. The satisfaction survey was administered without the students' knowledge.

Table 2. The six communication elements recorded by study observers.
1) Did the student acknowledge the patient using the patient's name?
2) Did the student introduce himself/herself by name?
3) Did the student explain his/her role as a medical student?
4) Did the student explain some of the steps (including diagnostic testing, medication administration, or observation) that would be used to address the patient's complaint?
5) Did the student explain that additional providers (such as a resident or attending physician) would also be evaluating the patient?
6) Did the student offer an estimated duration of time that the patient would spend in the ED?†
† For estimated duration, a general statement of time (e.g., "overnight" or "a few hours") was considered acceptable; a specific number was not required.
Following each shift, all data from the data collection forms and associated patient satisfaction surveys were entered into REDCap.15 REDCap (Research Electronic Data Capture) is a secure, web-based application designed to support data capture for research studies.

Power Analysis
The length of this study was determined by the usage rates of communication elements in our pilot study as well as data provided by hospital administration on expected baseline patient satisfaction. From these data we estimated that the baseline rate of "yes" responses would be between 50-60% for Hospital B and 30-40% for Hospital A. We recognized this value would fluctuate month to month, but the randomized design and the fact that intervention and control subjects would be studied in back-to-back months would help control for that variance. With 20 students rotating at the study sites per month and >100,000 visits annually at each ED, a preliminary power calculation with α=0.05, an effect size of 10% (a change in "yes" rate from 45% to 55% between groups), and N=750 encounters per group yielded a power of 97%.
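A power calculation of this kind can be reproduced with the standard normal approximation for a two-sided two-proportion z-test. The sketch below (stdlib only; an approximation, not the study's SAS computation) plugs in the stated scenario of 45% vs. 55% with 750 encounters per group:

```python
from statistics import NormalDist

def power_two_proportions(p1, p2, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-proportion z-test
    (normal approximation, equal group sizes)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)                     # critical value
    p_bar = (p1 + p2) / 2
    # Standard error of the difference under H0 (pooled proportion).
    se0 = (2 * p_bar * (1 - p_bar) / n_per_group) ** 0.5
    # Standard error under the alternative (separate proportions).
    se1 = (p1 * (1 - p1) / n_per_group
           + p2 * (1 - p2) / n_per_group) ** 0.5
    return z.cdf((abs(p2 - p1) - z_alpha * se0) / se1)

# The study's scenario: 45% vs. 55% "yes" rate, 750 encounters per group.
power = power_two_proportions(0.45, 0.55, 750)
```

With these inputs the approximation gives a power of roughly 0.97, consistent with the 97% reported above.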

Data Analysis
We used the chi-square test (significance at p<0.05) to assess the bivariate association of communication elements with likelihood to return, likelihood to refer, and excellent overall communication skill. Two-tailed t-tests and chi-square tests were used to determine whether student characteristics differed by randomization group. We used chi-square tests to determine whether the dichotomous items (each of the six communication elements, referral to the ED, return to the ED, and excellent overall communication) differed by randomization group, while two-tailed t-tests were used to determine whether the overall CAT score differed by the intervention.
Since multiple assessments were done on each student, we also performed mixed-effects regressions (logistic for dichotomous outcomes, linear for continuous outcomes) to account for repeated measures across students. In these models, intervention was included as the only fixed effect, with a random effect for student. We additionally ran models adjusting for student characteristics (gender, age, intended specialty, and rotation site); results were similar, so we report only the unadjusted results. All analyses were performed using SAS v9.4.
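The bivariate chi-square comparisons described above reduce to Pearson's test on a 2x2 table (element used vs. not, by dichotomous outcome). A minimal stdlib sketch, using purely hypothetical counts rather than study data, illustrates the computation; for 1 degree of freedom the p-value is the chi-square survival function P(X > x) = erfc(√(x/2)):

```python
import math

def chi_square_2x2(table):
    """Pearson chi-square test (no continuity correction) for a 2x2 table.
    Returns (statistic, p_value); the p-value uses the 1-df chi-square
    survival function, P(X > x) = erfc(sqrt(x / 2))."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row_totals = [a + b, c + d]
    col_totals = [a + c, b + d]
    stat = 0.0
    for i, obs_row in enumerate(table):
        for j, obs in enumerate(obs_row):
            expected = row_totals[i] * col_totals[j] / n  # expected count
            stat += (obs - expected) ** 2 / expected
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p

# Hypothetical counts: rows = element used / not used,
# columns = "yes" / "no" to the likelihood-to-return question.
stat, p = chi_square_2x2([[120, 80], [95, 105]])
```

The mixed-effects models reported in the paper additionally account for the clustering of encounters within students, which a plain chi-square test ignores.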

RESULTS
During the simulated encounters used for observer training, there was a high level of agreement among the four observers for each of the six primary data points (Appendix C).

Demographics
Eighty medical students were observed during the eight-month study period. One student declined to participate. Forty-five of the students were male. Twenty-nine planned to pursue emergency medicine (EM), and 51 planned to pursue other specialties (including anesthesiology, family medicine, general surgery, internal medicine, neurology, neurosurgery, obstetrics-gynecology, otolaryngology, orthopedic surgery, pathology, psychiatry, radiology, other surgical specialty, other non-surgical specialty, and multiple/unsure). There was no statistically significant difference between the groups in terms of the percentage of students pursuing a specialty in EM (p = 0.062). Four hundred seventy-four medical student-patient encounters were observed (231 in the control group and 243 in the intervention group). All observations that were begun were completed. Table 3 provides additional characteristics of the observed students.

Communication Element Use
Data for the use of communication elements in the control and intervention groups are shown in the Figure. The most frequently used element in both groups was the student introducing himself or herself by name, which occurred during 96.1% and 97.9% of encounters in the control and intervention groups, respectively. The least frequently used element was providing the patient with an expected duration of stay, which occurred during 11.3% and 13.1% of encounters in the control and intervention groups, respectively.

Table 3. Characteristics of medical students who participated in an eight-month study of patient satisfaction with student communication.
DISCUSSION

The statistically significant association between the intervention group and use of the explaining-role element was possibly due to chance, given the number of outcomes analyzed, and lost significance in the mixed-effects model.
Interestingly, baseline medical student (non-intervention) use of all communication elements in this study was much higher than in our previous study. Such a high baseline use of scripting may have contributed to the failure of the intervention to increase usage above that baseline rate. The reason for this increased utilization is unclear. To our knowledge, medical students did not receive any new formalized communication training in comparison to the previous study group, and observer training was also unchanged. It is possible that increased emphasis on communication throughout the medical school has resulted in improved modeling of good communication by faculty and teachers, or that medical student admissions processes have adapted to address communication skills among those accepted to the school. Additionally, the higher than anticipated baseline use of elements certainly affected the power of our study as we used much lower rates in our power analysis.
Our previous study found a strong association between use of several of the communication elements and increased patient satisfaction as measured by our selected outcomes. The current study did not confirm this association. Only one element-outcome pair, "Explain-Overall Communication Skill," maintained statistical significance in this study. With 18 element-outcome pairs, it is possible that this single association occurred by chance. However, the fact that this pair was also significant in our pilot study raises the possibility that it represents a true association rather than a chance finding.

Pettit et al. Medical Student Scripting and Patient Satisfaction

Table 5. Association of intervention with patient satisfaction outcomes.
ED, emergency department; SD, standard deviation.
* The mixed-effects model contained only a fixed effect for intervention group and a random effect for student.

The other statistically significant associations found in the pilot study lost their significance in the current study. Two of them, the "Acknowledge-Refer" and "Acknowledge-Overall Communication Skill" pairs, showed a small trend toward a positive association in the current study. It is possible that significance was lost because element use was much higher across the board, making a difference more difficult to detect.
In the current study, patient satisfaction scores were not significantly improved in students randomized to our intervention. This is not surprising given the failure of the intervention to significantly increase student use of most of the scripted elements that were emphasized. Our intervention was brief, and it is possible that a more robust intervention might have increased the use of scripted elements. However, it is still unknown if this would have had a positive effect on patient satisfaction. Even if there is some effect of the use of scripted communication elements on satisfaction, our current results suggest that the magnitude of this effect seems to be small.
The most likely explanation for the failure of this study to show an association between the selected scripted communication elements and patient satisfaction is that patient satisfaction is a multifactorial construct, and the contribution of scripted communication elements to it is very small. Scripted communication as a strategy to improve patient satisfaction is only a small piece of a much larger puzzle. Scripting may help providers remember a baseline level of communication, and this study does not indicate that initial training in scripted communication lacks value. However, our study indicates that refresher training in scripting is not, by itself, enough to improve communication beyond a baseline level. The effect of refresher training, and of scripted communication in general, may also be influenced by experience and level of training; different results might be obtained with providers at different levels. Future research should look beyond a simple communication checklist, perhaps toward interventions that help providers better understand the patient's perspective, experience, and expectations.

LIMITATIONS
There were several limitations to this study. The study group consisted of a sample of medical students from a single medical school. While we attempted to blind the students to the nature of the study, the Hawthorne effect resulting from the knowledge that they were being observed may have contributed to increased use of all communication elements in both groups, limiting our ability to show a difference between groups. Also, while we took measures to avoid the intervention group influencing the control group (such as holding the intervention at clinical site orientation rather than the clerkship orientation), there is no guarantee that the groups did not communicate about the intervention.
Additionally, the study is limited by the lack of explicit testing of the validity of the outcome measures. The patient satisfaction survey is similar to actual surveys that are widely used in hospital systems like ours, and the CAT has been previously validated for other purposes. However, both tools were modified for the purposes of our study, which could threaten their validity. Finally, although we stressed to the patient that the survey pertained only to their encounter with the student, it is possible that other aspects of their visit, including interactions with other providers, influenced survey results. It is also likely that other unmeasured verbal and non-verbal aspects of communication influenced results. We were also unable to control medical student exposure to other forms of communication education and did not examine student retention of the information covered during our educational intervention.

CONCLUSION
We hypothesized that an educational intervention to increase use of scripted communication elements would result in increased patient satisfaction. Unfortunately, our intervention did not result in any increase in either use of scripting by students or patient satisfaction. Additionally, this study failed to confirm earlier findings of an association between scripted communication elements and patient satisfaction. Communicating effectively with patients is likely much more complex than using a sample of scripted communication elements, and further research on optimizing patient-provider communication is urgently needed.