Abstract

Background. Patient satisfaction surveys have become increasingly important as their results help to determine Centers for Medicare and Medicaid Services (CMS) reimbursement. However, these questionnaires have known sources of bias (self-selection, responder, attribution, and nonresponse). Objective. We developed a real-time (RT) survey delivered in the hospital ED to evaluate the effect of implementing RT patient satisfaction surveys on physician behavior and hypothesized that the timing of patient satisfaction survey delivery would significantly impact the results. Method. Data from real-time patient satisfaction surveys were collected in phases from 12/2015 to 5/2017. Hospital-sponsored (HS) surveys were administered after discharge from 12/2015 to 12/2016. Results. For RT surveys, resident physicians were significantly more likely to write their names on the whiteboard and to sit down with patients. Behavior modifications by attending physicians were not significant. Patient satisfaction measures did not improve significantly between periods for RT or HS surveys; however, RT survey responders were significantly more likely to recommend the ED to others. Conclusion. The timing of survey administration significantly altered resident physicians' behavior but had no effect on patient satisfaction scores. RT responders were significantly more likely to recommend the emergency department to others.

1. Introduction

Using patient satisfaction metrics to manage the hospital physician workforce began with the founding of Press Ganey™ in 1985 [1]. In 2006, patient satisfaction surveys became standard procedure with the development and implementation of the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey [2]. Patient satisfaction surveys have become increasingly important since 2010 with the Patient Protection and Affordable Care Act, under which survey results help to determine hospital and emergency department (ED) reimbursement by the Centers for Medicare and Medicaid Services (CMS) [3]. However, these questionnaires have several known sources of bias, including self-selection bias, responder bias (the person filling out the questionnaire may not be the patient), attribution bias (the wrong physician is attributed with the behavior), and nonresponse bias (a portion of the surveyed population does not respond to the survey) [4, 5]. Additionally, little evidence demonstrates that patient satisfaction surveys actually impact physician behavior or improve patient care [4, 6].

In an attempt to evaluate the performance of physicians and staff and to reduce the aforementioned sources of bias, we developed a real-time survey delivered in the hospital ED. We aimed to evaluate the effect of implementing real-time patient satisfaction surveys on physician behavior in an academic ED. Further, we hypothesized that the timing of patient satisfaction survey delivery would significantly impact the results.

2. Materials and Methods

This was a cross-sectional study of a convenience sample of patients presenting to a single academic ED. Real-time (RT) patient satisfaction surveys were administered to English-speaking patients in several phases from December 1, 2015, through May 31, 2017.

Resident and attending physician providers were unaware of the study data collection during the preannouncement period (Period 1), December 1, 2015, through March 1, 2016. After this initial period, the survey study protocol was announced to both groups of physician providers over two days via electronic mail, in person at resident educational conferences, in person at a faculty meeting, and through individualized reports sent to each physician providing feedback on their results compared with overall peer group performance. Additionally, resident physicians reviewed their individualized reports with their residency director at a mandatory biannual review meeting in June 2016. Data were collected for an additional three months during Period 2 (March 3, 2016, through May 31, 2016), with email reminders and cohort data distribution at months two and three reiterating that patient satisfaction and physician interactions were being monitored. Finally, data were collected during Period 3, from June 1, 2016, to May 31, 2017, without any reminders, individualized reports, or meetings with residents; however, the study was discussed with the incoming 2016 intern class during this time.

Hospital-sponsored (HS) surveys were also administered from December 2015 to December 2016 by National Research Corporation (NRC) Health and were designed to meet the HCAHPS criteria set by CMS. These surveys were administered by mail or telephone after discharge, and their existence was well known to hospital staff and administration. RT survey data were collected using a Qualtrics (http://www.qualtrics.com) survey delivered on mobile devices once the patient's disposition status changed to admit, discharge, or transfer on the electronic medical record (EMR) tracking board. Trained undergraduate student research volunteers administered the surveys as part of the University of Arizona Research Associate Program (RAP). The surveys, modeled after an existing HS survey, evaluated specific physician interactions, overall care, and likelihood to recommend the ED to a family member or friend for treatment. Questions regarding specific physician interactions used the AIDET (acknowledge, introduce, duration, explanation, and thank you) communication model as a framework to create actionable items for improvement [4]. Patients were asked to rate their separate interactions with their resident and attending physicians as "great," "good," "okay," or "not good." If overall care was not rated as "great," patients were asked to identify why by selecting reasons from a list of options, including "better communication about plan of care," "more frequent visits to check on the patient," "better bedside manner," and/or "more time spent in the room with the patient." Patients were also asked to evaluate resident and attending physicians separately on the following parameters: "Did the doctor introduce him/herself?," "Did the doctor sit down during the visit?," and "Did the doctor say 'thank you'?" (see Supplementary Materials for complete survey questions). The survey also asked "Was the name of the doctor written on the whiteboard?," which was answered by the RAP student in the patient room by directly visualizing the whiteboard. Finally, patients were asked if they would recommend the hospital to a family member or friend. Data gathered from this final question of the RT survey were compared with HS survey data from a similar question for each month from December 2015 to December 2016. Months with fewer than 20 RT surveys were excluded from this analysis.
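To make the month-level exclusion rule concrete, the following is a minimal hypothetical sketch in Python; the `responses` structure, field names, and function are invented for illustration and were not part of the study's actual data pipeline.

```python
# Hypothetical sketch of the month-level exclusion rule described above:
# months with fewer than 20 RT survey responses are dropped before the
# monthly RT-vs-HS "recommend" comparison. All names are illustrative.
from collections import defaultdict

MIN_SURVEYS_PER_MONTH = 20  # exclusion threshold stated in the methods

def monthly_recommend_rates(responses):
    """responses: iterable of (month, would_recommend) pairs,
    e.g., ("2016-01", True). Returns {month: fraction recommending}
    for months meeting the minimum-survey threshold."""
    by_month = defaultdict(list)
    for month, would_recommend in responses:
        by_month[month].append(bool(would_recommend))
    return {
        month: sum(votes) / len(votes)
        for month, votes in by_month.items()
        if len(votes) >= MIN_SURVEYS_PER_MONTH
    }
```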

For no/yes (0, 1) dichotomous variables, the mean percentage represents the percentage of subjects who answered that question positively. For these dichotomous outcomes, Fisher's exact test was used to test for statistically significant differences in proportions across the three periods of analysis; this test was chosen to address the small sample size and the unequal distribution among some cells. Mean percentages were also calculated for all other categorical data. All tests were two-sided, and the level of significance was set at α = 0.05. Statistical analyses were performed using Stata 14 (StataCorp, College Station, TX).
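As a minimal sketch of the test just described, the snippet below runs Fisher's exact test on a single 2 × 2 comparison using Python's scipy in place of Stata 14; the counts are hypothetical placeholders, not the study's data.

```python
# Minimal sketch of the dichotomous-outcome comparison described above,
# using Python/scipy in place of Stata 14. The counts below are
# hypothetical placeholders, NOT the study's actual data.
from scipy.stats import fisher_exact

# 2 x 2 contingency table for one no/yes behavior (e.g., "name written
# on the whiteboard") compared between two periods:
# rows = Period 1, Period 2; columns = yes, no
table = [[60, 59],   # hypothetical Period 1 counts
         [110, 35]]  # hypothetical Period 2 counts

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")

# A two-sided p below alpha = 0.05 indicates a significant difference in
# proportions. Note: scipy's fisher_exact handles only 2 x 2 tables; the
# three-period comparison in the paper requires an r x c extension such
# as the Freeman-Halton test (available in Stata via "tabulate, exact").
```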

3. Results

Data from 828 HS surveys collected between December 2015 and December 2016 were provided by NRC Health for comparison with RT survey data collected during the same period. In total, 481 RT surveys were collected from December 2015 to May 2017: 124 during the three-month preannouncement Period 1, 186 during the three-month postannouncement Period 2 with reminders, and 171 during Period 3, the subsequent year without reminders, individualized reports, or meetings with residents. Surveys in which the incorrect physician was selected and surveys of the authors were removed, leaving 119 RT surveys in Period 1, 145 in Period 2, and 162 in Period 3, for a total of 426 surveys available for analysis.

3.1. Physician Interactions (RT Survey Only)

In comparison to Period 1, resident physicians improved in all measured interactions during postannouncement Periods 2 and 3 and were significantly more likely to write their names on the whiteboard and to sit down while interacting with their patients (Table 1). Over the same periods, attending physicians showed nonsignificant increases in most interactions, although adherence to saying thank you decreased in Periods 2 and 3. When examining trends across Periods 1, 2, and 3, the significantly improved postannouncement habits persisted among resident physicians, whereas improved habits, aside from sitting down, were lost among attending physicians once reminders were no longer provided in Period 3 (Table 1).

3.2. Patient Satisfaction (RT and HS Surveys)

In general, RT surveys showed that patients were very satisfied with both their overall medical care and their physician interactions, though satisfaction with these factors did not change significantly across Periods 1, 2, and 3 (Figures 1 and 2). Comparison of all three periods also failed to find a significant difference in "good" or "great" overall physician care (96.58% vs. 97.21% vs. 96.03% and 96.61% vs. 96.55% vs. 93.38%). When patients did not rate their overall medical care as "great," the most commonly cited suggestion was "better communication of plan of care," while for physician care, the most commonly cited suggestion was "more frequent visits" to check on the patient.

Data from RT surveys found no difference in the likelihood to recommend the ED to a family member or friend for treatment across Periods 1, 2, and 3 (97.48% vs. 95.17% vs. 93.84%), suggesting that the altered physician behaviors did not impact overall patient satisfaction. Similarly, the reported likelihood to recommend the ED on the HS survey did not change between the available data in Periods 1 and 2 (68.94% vs. 68.47%).

Finally, when compared with the HS surveys collected over the same time period, RT survey responders were significantly more likely to recommend the ED to a family member or friend (Table 2).

4. Discussion

Modern medicine places many competing pressures on clinicians to comply with the newest clinical pathways, CMS guidelines, and other local institutional policies and protocols. Promoting patient satisfaction by changing physician behavior has traditionally been one of the most difficult areas to address [7]. Many emergency medicine providers have actively resisted education on improving patient satisfaction, feeling that their duty is to provide appropriate and necessary medical care rather than to treat patients as customers.

Despite reluctance to actively embrace patient satisfaction metrics and the noted self-selection, responder, attribution, and nonresponse biases in the delayed surveying methodology, HCAHPS patient satisfaction survey results have been used as a component of value-based incentive payments in the CMS Hospital Value-Based Purchasing program since 2010 [3, 5]. Administering surveys in real time removes many of these biases and may provide more accurate measures of patient satisfaction, resulting in improved reimbursement by CMS and a more realistic view of the patient experience.

In this study, we aimed to evaluate the effect of implementing a real-time patient satisfaction survey on physician behavior in an academic ED and aimed to determine if removing many of the sources of bias inherent to the HS survey would affect the rates of patient satisfaction scores. While administration of RT patient satisfaction surveys did alter physician behaviors believed to improve patient satisfaction, it did not improve overall patient satisfaction rates. The available and official hospital-sponsored patient satisfaction rates also did not significantly change. This is not unexpected, given the sparse evidence that patient satisfaction surveys alter physician behavior or patient care [4, 6].

Our study found that the likelihood to recommend the ED was consistently and significantly higher on the real-time patient satisfaction surveys than on the mail or telephone hospital patient satisfaction surveys conducted after discharge. We believe this is due to the reduction in the self-selection, responder, attribution, and nonresponse biases inherent to delayed survey methodology; however, further studies are needed to identify possible confounding factors, including the effect of receiving hospital bills in the interim under delayed surveying methods.

5. Limitations

This study has several limitations. The real-time patient satisfaction surveys were conducted only when the RAP students were available. For example, RAP students were away on summer break during the first three months of Period 3, which likely contributed to the decrease in the total number of surveys conducted in that period. Additionally, RAP students were more often present during the day than at night, given their educational commitments. While patients generally do not choose the time of day at which they seek ED treatment, we cannot exclude the possibility that patients surveyed during the day differ inherently from those who would be surveyed at night.

Surveys were conducted only in English, excluding a significant portion of patients who seek care at our ED. Those with altered mental status were not surveyed, as they would be unable to participate in survey questions. As such, the patient cohort from which our data were gathered truly represents a convenience sample. We also could not adequately account for transitions of care in our survey. RAP students identified the resident and attending physicians assigned to the surveyed patient at the time the disposition changed to admit, discharge, or transfer; these providers may not have provided the majority of the patient's care. Thus, the data gathered on these providers may in fact have been more representative of the previous treatment team, leading to mislabeling of individualized data. Cohort data, however, would not have differed and still provided an appropriate basis for comparison before and after the survey's existence was announced. Notably, our study was not sufficiently powered to show statistically significant increases for many of our questions. Required offsite rotations for emergency medicine residents limited our survey sample size per resident, and thus we were unable to provide individualized analyses of behavioral changes before and after individualized feedback. If this study were performed at a nonacademic, community hospital, we suspect the less dynamic environment, consistent physician scheduling, financial incentives, and contract extensions based on patient satisfaction scores might yield different results. With high satisfaction rates at baseline, many more surveys would need to be collected to identify which provider behaviors lead to meaningful increases in patient satisfaction. It is also possible that the increased patient satisfaction reported when surveyed in the ED resulted from patients doubting that their responses would be truly anonymous; they may therefore have given higher scores than they would on other, more clearly anonymous scoring systems.

Finally, while the RT surveys' question selection was modeled after the HS surveys, the two differed in question format: the RT surveys generally used a Likert scale, while the HS surveys formatted questions categorically with yes/no answers. Comparison of RT and HS survey data was therefore limited to the categorical questions. Additionally, our RT surveys were administered to patients being either admitted or discharged, whereas the HS surveys are administered only to admitted patients. Only December 2015 to December 2016 HS survey data were available, limiting the comparison with RT survey data to this time period.

6. Conclusion

While awareness of real-time patient satisfaction surveys did affect physician behavior and may have improved adherence to patient satisfaction protocols, these improved interactions had no effect on the likelihood to recommend the ED for either survey method between the pre- and postannouncement periods. Real-time survey responders were significantly more likely to recommend the ED to a family member or friend than those who completed the delayed hospital-sponsored survey via mail or telephone during the same time period.

Data Availability

The data used to support the findings of this study are included within the article.

Disclosure

This work was presented at Western Society for Academic Emergency Medicine in Stanford, CA, on April 7‐8, 2017.

Conflicts of Interest

The authors have no conflicts of interest.

Acknowledgments

Special thanks are due to the Arizona Emergency Medicine Research Center, the Research Associate Program at the University of Arizona, and the Academy of Medical Education Scholars, University of Arizona College of Medicine.

Supplementary Materials

Real-time patient satisfaction survey.