Canadian Journal of Gastroenterology and Hepatology
Volume 2016 (2016), Article ID 6982739, 7 pages
http://dx.doi.org/10.1155/2016/6982739
Research Article

Pilot Validation Study: Canadian Global Rating Scale for Colonoscopy Services

1Division of Gastroenterology, McGill University and McGill University Health Center, Montreal, QC, Canada
2Division of Clinical Epidemiology, Research Institute of the McGill University Health Centre, Montreal, QC, Canada

Received 22 December 2015; Accepted 5 September 2016

Academic Editor: Geoffrey C. Nguyen

Copyright © 2016 Stéphanie Carpentier et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Background. The United Kingdom Global Rating Scale (GRS-UK) measures unit-level quality processes in digestive endoscopy. We evaluated the psychometric properties of its Canadian version (GRS-C), endorsed by the Canadian Association of Gastroenterology (CAG). Methods. Prospective data collection at three Canadian endoscopy units assessed GRS-C validity, reliability, and responsiveness to change according to responses provided by physicians, endoscopy nurses, and administrative personnel. These responses were compared to national CAG endoscopic quality guidelines and GRS-UK statements. Results. Most respondents identified the overarching theme each GRS-C item targeted, confirming face validity. Content validity was suggested, as 18 of 23 key CAG endoscopic quality indicators (78%, 95% CI: 56–93%) were addressed in the GRS-C; statements not included pertained to educational programs and competency monitoring. Concordance between GRS-C and GRS-UK ratings ranged from 75% to 100%. Test-retest reliability kappa scores ranged from 0.60 to 0.83, while responsiveness-to-change scores at 6 months after intervention implementation were significantly greater in two of three units. Conclusion. The GRS-C exhibits satisfactory psychometric properties, supporting its use in a national quality initiative aimed at improving processes in endoscopy units. Data collection from more units and linkage to actual patient outcomes are required to ensure that GRS-C implementation facilitates improved patient care.

1. Introduction

Since 2010, all Canadian provinces have either announced or started implementing organized colorectal cancer (CRC) screening. The increase in colonoscopy volume, coupled with the variability in colonoscopy service quality across sites, has ignited a movement for quality assurance [1–4]. Current CRC screening guidelines emphasize quality in colonoscopy, and the Canadian Association of Gastroenterology (CAG) began a quality program in endoscopy in 2012-2013 [5]. Central to the CAG’s program is the Global Rating Scale (GRS), an endoscopy quality improvement tool that was developed in 2005 in the United Kingdom (UK). This 12-item GRS-UK questionnaire was developed following meetings with endoscopy staff [6, 7] who were instructed to consider areas that would be important for a patient undergoing endoscopy. The GRS program offers endoscopy facilities the ability to evaluate the quality of their services according to a routine schedule and then to evaluate the effects of targeted quality improvement interventions. The GRS-UK has proven effective in improving endoscopy services, and while no formal validation studies for the GRS-UK have been performed, some groups in the UK, the Netherlands, and Scotland have attempted to validate patient involvement in the GRS [8, 9]. Experts in Canada were concerned that the tool might not be relevant to the Canadian public or the Canadian health care system, because the quality items were generated by health professionals in the UK, who work in, and whose patients are served by, a different healthcare system. Thus, a Canadianized version of the GRS (GRS-C) was created [10]. As of July 2015, 109 sites were participating in this concerted, nation-wide GRS-C quality initiative.

Similar to the UK version of the GRS, the GRS-C measures two domains: clinical quality and quality of patient experience. The clinical quality domain includes six items: appropriateness, information/consent, safety, comfort, quality of the procedure, and communicating results. The quality of patient experience domain also includes six items: equality, timeliness, booking and choice, privacy and dignity, aftercare, and ability to provide feedback. Each of these items, in turn, includes a series of graduated statements, and based on the responses to these graduated statements, the endoscopy suite is scored on a scale that ranges from A to D (A being the highest and D the lowest score).

We sought to examine the psychometric properties of the GRS-C: specifically, validity (face, content, and construct), test-retest reliability, and responsiveness to change.

2. Methods

2.1. Participating Sites

A multisite prospective cohort study was undertaken in endoscopy facilities at the Royal Victoria and Montreal General Hospitals (of the McGill University Health Centre) in Montreal and the Queen Elizabeth II Health Sciences Centre in Halifax, Nova Scotia (see Table 1 for characteristics of participating sites).

Table 1: Description of the hospital centres included in the study.
2.2. Study Population

A staff committee comprising an endoscopist, a nurse, administrative staff, an endoscopic technical assistant, and a representative from the management team was convened at each site to complete the GRS-C. The members of the committee remained constant throughout the study and were experienced in completing the GRS, which had already been adopted at the time of study inception. These staff committees completed all questionnaires for psychometric testing except for face validity, for which a separate staff committee was recruited. Recruitment of the face validity group was based on lack of familiarity with the GRS tool and on availability.

2.3. Validity and Reliability Testing
2.3.1. Face Validity

The statements from each domain were isolated without accompanying descriptive information. We then asked the participants to write, in one sentence or less, what overarching theme they thought the statements intended to measure.

2.3.2. Content Validity

We systematically examined GRS-C items to ensure they included accepted key elements of a quality colonoscopy experience [4]. Two members of the research team (NS and SC) compared the GRS-C to the content of the “Canadian Association of Gastroenterology (CAG) Consensus guidelines on safety and quality indicators in endoscopy” [1], and disagreements were resolved through an independent third party. We also examined the percentage of statements in common for each item in the GRS-C and the reference GRS-UK.

2.3.3. Construct Validity

We looked at the degree to which the GRS-C measured the quality aspects being investigated in two ways. First, at all sites, the staff committee completed the GRS-C and the GRS-UK on the same day, and scores of the GRS-C items were compared to those of the reference GRS-UK. Second, at one site, we looked at patient outcomes data related to GRS-C statements. We compared responses from a repeat GRS-C administration against actual patient experience data collected simultaneously. Patient experience data were extracted from a “patient satisfaction survey” administered to every 5th colonoscopy patient until a total of 500 surveys had been distributed. Surveys were handed out by the unit’s nursing staff, to be completed at home and returned anonymously in provided prestamped envelopes. A total of 272 surveys were returned, for a response rate of 54% (a sample patient satisfaction survey is available on the GRS-C website) [11].

2.3.4. Test-Retest Reliability

For reliability testing, we examined whether responses to the GRS-C were consistent when administered under consistent conditions. Staff committee members completed the GRS-C at time zero and again two weeks later, without changing any aspect of endoscopic services delivery. The staff committees were blinded to the purpose of the retest.

2.3.5. Responsiveness to Change

At the time of GRS-C completion, deficiencies were identified and one or more action plans were created locally to address site-specific deficiencies. Six months following the implementation of the action plan, the GRS-C was completed by the staff committees. GRS-C scores before and following planned implementation of these changes were compared.

2.3.6. Statistical Analysis

Face validity was analyzed qualitatively by comparing each staff committee member’s response to the known domain theme. Discrepancies were noted.

Content validity was evaluated as the proportion of the CAG Consensus guidelines on safety and quality indicators in endoscopy that were represented in the GRS-C statements and the percent overlap with the reference GRS-UK. Construct validity was assessed by comparing the overall grade (A-D) for the 12-item scores of the GRS-C with those of the reference GRS-UK when both were administered at the same time. Comparisons of selected endoscopy unit outcomes corresponding to distinct GRS-C statements were carried out. Descriptive statistics included proportions with their corresponding 95% confidence intervals.
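The confidence intervals reported for these proportions are consistent with the Wilson score method; as an illustration only (not the authors’ actual software), the calculation can be sketched as follows, using the content validity result of 18 of 23 indicators:

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple:
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# Content validity: 18 of 23 CAG quality indicators addressed in the GRS-C
lo, hi = wilson_ci(18, 23)
print(f"78% (95% CI: {lo:.0%}-{hi:.0%})")  # ~58%-90%
```

This reproduces the 58–90% interval reported in the Results for 18/23.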

Reliability was assessed using kappa scores calculated on the 12 item scores of the GRS-C administered at baseline and at 2 weeks. Responsiveness to change was assessed using the McNemar chi-square test for paired data, comparing the individual 12-item ratings of the GRS-C administered at baseline and at 6 months following improvement interventions.
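The two analyses above can be sketched in a few lines. This is an illustrative reimplementation (unweighted Cohen’s kappa and a continuity-corrected McNemar chi-square), not the authors’ actual code, and the example ratings below are hypothetical:

```python
import math
from collections import Counter

def cohens_kappa(r1, r2):
    """Unweighted Cohen's kappa between two lists of categorical ratings."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n  # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2[k] for k in c1) / n**2    # agreement expected by chance
    return (po - pe) / (1 - pe)

def mcnemar(b, c):
    """Continuity-corrected McNemar chi-square for paired binary data.
    b, c = counts of discordant pairs; returns (chi2, p) with 1 df."""
    chi2 = (abs(b - c) - 1) ** 2 / (b + c)
    p = math.erfc(math.sqrt(chi2 / 2))            # survival function of chi2(1 df)
    return chi2, p

# Hypothetical A-D ratings of the 12 GRS-C items at baseline and at 2 weeks
baseline = list("AABBCCDDABCA")
retest   = list("AABBCCDDABCB")
print(round(cohens_kappa(baseline, retest), 2))   # → 0.89
```

In practice one would use a statistics package (e.g., `sklearn.metrics.cohen_kappa_score` or `statsmodels.stats.contingency_tables.mcnemar`) rather than hand-rolled functions.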

3. Results

3.1. Face Validity

As outlined in Table 2, for the twelve groups of statements, the majority of participants correctly identified the intended overarching theme.

Table 2: Face validity.
3.2. Content Validity

Of the 23 key quality indicators identified in the CAG Consensus guidelines on safety and quality indicators in endoscopy, 18 (78%; 95% CI: 58–90%) were addressed in the GRS-C. The GRS-C did not evaluate education and monitoring of trainees within the endoscopy suite or education of staff, nor did it evaluate criteria for maintaining endoscopist privileges (Table 3).

Table 3: CAG Consensus guidelines on safety and quality indicators in endoscopy.

When the content of the GRS-C and the GRS-UK was compared, 9 of the 12 GRS-C items had at least 70% overlap in statement content. Appropriateness, communicating results, and equality of access were the 3 items that fell below this level (Table 4).

Table 4: Content validity: content comparison GRS-C/GRS-UK.
3.3. Construct Validity

For site 1, 75% (95% CI: 47%–91%) of the GRS-C item ratings were the same as those obtained for the corresponding reference GRS-UK items. For sites 2 and 3, 100% (95% CI: 76%–100%) and 92% (95% CI: 65%–99%) of ratings, respectively, were the same as the GRS-UK.

The response to GRS-C statements and corresponding outcome data are detailed in Table 5.

Table 5: Construct validity: select GRS-C statements versus auditable outcomes.
3.4. Test-Retest Reliability

Test-retest reliability ranged from 0.65 to 0.83. At site 1, agreement was “almost perfect” (kappa = 0.83; 95% CI: 0.73–0.93), while for sites 2 and 3 agreement was “substantial” (kappa = 0.81; 95% CI: 0.70–0.92 and kappa = 0.65; 95% CI: 0.51–0.80, respectively).

3.5. Responsiveness to Change

Table 6 lists the separate initiatives that were attempted or carried out between the baseline and 6-month follow-up GRS administrations. At site 1, no differences were found comparing pre- and post-GRS responses; however, statistically significant differences were noted for sites 2 and 3.

Table 6: Listed action plans.

Various improvement initiatives were undertaken in the 6-month interval between baseline and the second iteration. Site 1 created a patient information pamphlet on colonoscopy and increased the frequency of quality assurance reviews (endoscopist, adverse events, and general unit review). Site 2 created and implemented a patient satisfaction survey, translated patient related materials into French, and improved tracking of cancellation rates. Site 3 created a patient satisfaction survey, increased review of direct to procedure guidelines, and implemented reliable electronic distribution of reports to referring physicians.

4. Discussion

In this multisite study, we tested the validity and reliability of the GRS-C that is increasingly used in Canada to improve the quality of endoscopy services [12].

In assessing face validity, participants were able to correctly interpret the items of the GRS-C despite a lack of familiarity with the tool. Three participants interpreted the “consent process” as reflecting “patient satisfaction,” perhaps because many of the statements appear to be centred on the patient’s satisfaction with how consent is obtained. The extent to which patient opinion is captured in the GRS instrument is unclear, as the GRS mainly focuses on processes in endoscopy service delivery.

We found substantial overlap between the GRS-C and the CAG Consensus guidelines on safety and quality indicators in endoscopy. The presence and monitoring of educational programs, both for staff themselves and for GI trainees, are not explicitly addressed by the GRS-C. However, implicit in the “quality of the procedure” domain is the understanding that, to favor self-improvement, endoscopists will have to embark on continuing professional development activities. National consensus on what constitutes an effective training program would need to be more precisely defined for all endoscopy units before specific educational initiatives can be agreed upon.

Similarly, before statements regarding maintenance and revocation of privileges can be made, “maintenance of colonoscopy certification” standards may have to be agreed upon by all participating professional societies. The CAG is leading such an initiative, beginning with the hands-on Skills Enhancement in Endoscopy courses now available across the country [13].

In comparison to the GRS-UK, 9 of the 12 GRS-C items showed at least 70% overlap in content, while three fell below this level. The main difference between the GRS-UK and the GRS-C for item 6 involves the emphasis on enforcing standardized electronic reporting. This is emphasized in the CAG Consensus guidelines, and so we believe these statements are important additions to this item. The GRS-UK includes a standardized timeframe within which results of pathology reports should be acted upon once received. The GRS-C does not include this, potentially because turnaround times for pathology reports themselves currently vary from institution to institution across Canadian provinces.

There are several differences between the GRS-UK and the GRS-C for item 7 (equality of access). The GRS-UK includes a statement discouraging family and friends from acting as interpreters; this may be an important ethical addition to the standards set out in the GRS-C. The GRS-UK also sets out a standard that the communication method for all groups should be clearly and individually outlined in a policy statement. This level of policy detail is not demanded in the GRS-C. The GRS-C does, however, require that action plans and their results be regularly reviewed, for example as part of a planned follow-up to annual patient surveys, ultimately leading to a patient-centred standard of care for diversity.

Undoubtedly, it would be helpful to align all items of the GRS-C with auditable outcomes to more explicitly measure the construct validity of the GRS-C. Interestingly, in statement 1.11 (availability of patient information), although the committee responded that information sheets were provided, only two-thirds of patients surveyed reported having received these. This highlights differences between the actual patient experience and the perceived patient experience as measured by the completing committee. Ongoing feedback from patients to the responsible unit managers is important to ensure that policies are consistently implemented.

For reliability testing, GRS-C completion at baseline and at 2 weeks was chosen in order to study interpretation of items and reliability of responses without confounding from differences brought about by interval systemic change. The GRS-C proved reliable in this study; in fact, reliability was almost perfect at one site and substantial at the other two. In reviewing feedback provided by the completing committees, although the questionnaires were filled out by the same people in all iterations, variability in interpretation of the items can explain the few inconsistent responses. For example, for statement 3.4 (unacceptable comfort levels prompt a review during the procedure…), one centre identified that most, but not all, endoscopists review indications, technique, and sedation levels when unacceptable comfort levels are reached. The question was initially answered “yes” because most physicians did this, but 2 weeks later the group answered “no,” as they focused on the minority of endoscopists who do not adhere to this practice. Similarly, for statement 5.2 (“Surveillance and screening endoscopy is booked according to established guidelines”), certain physicians do not routinely follow these guidelines, but since the majority do, two sites answered “yes.” Members of the staff committee were unsure whether this was the “right” way to respond. This finding speaks to the need for more effective management and more uniform policy statements across all endoscopy facilities and their staff.

A partial solution to the issue of interpretation variability may be to ensure, as much as possible, that the same group completes the GRS at each cycle. Indeed, this may increase consensus on interpretation and increase reliability over time. Furthermore, the current study looked at iterations only 6 months apart.

Two of the three sites demonstrated significant responsiveness to change at 6 months, after action plans had been implemented. Both centres had introduced the GRS-C at study inception, and it may be that significant improvements could be made as a result of only a few key actions. For example, the creation and distribution of a patient satisfaction survey addressing several GRS statements greatly improved the performance of the unit. It may be that as a unit improves in service delivery, it requires more detailed interventions to continue achieving improvements in GRS-C ratings. Indeed, site 1 commented that many of the detailed action plans put in place at week 2 were still in progress 6 months later. Several interventions were planned, but responders commented at 6 months that time frames were difficult to estimate without input from all unit staff or from the upper management in charge of resource allocation. It would be interesting to continue this study at 6, 12, and 18 months and assess responsiveness to change within these more extended time frames. Other than the short 6-month follow-up period, limitations of this pilot study were the relatively small number of participating sites and the limited availability of outcomes data to assess construct validity (available from 1 site only).

5. Conclusion

In conclusion, this pilot study provided support for use of the GRS-C. Our findings showed satisfactory face validity, content validity, construct validity, and reliability. Responsiveness to change was demonstrated at the two endoscopy units with less experience with the GRS-C. Further studies are needed to confirm these findings.

Competing Interests

The authors declare that they have no competing interests.

Authors’ Contributions

Study concept and design were done by Stéphanie Carpentier, Maida J. Sewitch, and Alan N. Barkun. Acquisition of data was done by Stéphanie Carpentier, Maida J. Sewitch, Sara El Ouali, Myriam Martel, and Alan N. Barkun. Analysis and interpretation of data were done by Stéphanie Carpentier, Maida J. Sewitch, Sara El Ouali, Myriam Martel, and Alan N. Barkun. Drafting of the manuscript was done by Stéphanie Carpentier, Sharara N, Maida J. Sewitch, Sara El Ouali, Myriam Martel, and Alan N. Barkun. Critical revision of the manuscript for important intellectual content was done by Stéphanie Carpentier, Maida J. Sewitch, Sara El Ouali, Myriam Martel, and Alan N. Barkun. Statistical analysis was done by Stéphanie Carpentier, Maida J. Sewitch, Myriam Martel, and Alan N. Barkun. Study supervision was done by Maida J. Sewitch and Alan N. Barkun.

References

  1. D. Armstrong, A. Barkun, R. Bridges et al., “Canadian Association of Gastroenterology consensus guidelines on safety and quality indicators in endoscopy,” Canadian Journal of Gastroenterology, vol. 26, no. 1, pp. 17–31, 2012.
  2. R. L. Barclay, J. J. Vicari, A. S. Doughty, J. F. Johanson, and R. L. Greenlaw, “Colonoscopic withdrawal times and adenoma detection during screening colonoscopy,” The New England Journal of Medicine, vol. 355, no. 24, pp. 2533–2541, 2006.
  3. R. H. Fletcher, M. R. Nadel, J. I. Allen et al., “The quality of colonoscopy services—responsibilities of referring clinicians: a consensus statement of the Quality Assurance Task Group, National Colorectal Cancer Roundtable,” Journal of General Internal Medicine, vol. 25, no. 11, pp. 1230–1234, 2010.
  4. J. Sint Nicolaas, V. de Jonge, R. A. de Man et al., “The Global Rating Scale in clinical practice: a comprehensive quality assurance programme for endoscopy departments,” Digestive and Liver Disease, vol. 44, no. 11, pp. 919–924, 2012.
  5. D. J. Leddin, R. Enns, R. Hilsden et al., “Canadian Association of Gastroenterology position statement on screening individuals at average risk for developing colorectal cancer: 2010,” Canadian Journal of Gastroenterology, vol. 24, no. 12, pp. 705–714, 2010.
  6. V. de Jonge, J. Sint Nicolaas, E. A. Lalor et al., “A prospective audit of patient experiences in colonoscopy using the global rating scale: a cohort of 1187 patients,” Canadian Journal of Gastroenterology, vol. 24, no. 10, pp. 607–613, 2010.
  7. R. Valori, J. Sint Nicolaas, and V. De Jonge, “Quality assurance of endoscopy in colorectal cancer screening,” Best Practice & Research: Clinical Gastroenterology, vol. 24, no. 4, pp. 451–464, 2010.
  8. J. Sint Nicolaas, V. de Jonge, I. J. Korfage et al., “Benchmarking patient experiences in colonoscopy using the Global Rating Scale,” Endoscopy, vol. 44, no. 5, pp. 462–472, 2012.
  9. T. Williams, A. Ross, C. Stirling, K. Palmer, and P. S. Phull, “Validation of the Global Rating Scale for endoscopy,” Scottish Medical Journal, vol. 58, no. 1, pp. 20–21, 2013.
  10. D. MacIntosh, C. Dubé, R. Hollingworth, S. V. van Zanten, S. Daniels, and G. Ghattas, “The endoscopy Global Rating Scale—Canada: development and implementation of a quality improvement tool,” Canadian Journal of Gastroenterology, vol. 27, no. 2, pp. 74–82, 2013.
  11. Canada-GRS, http://www.mdpub.org/grs/
  12. Quality Program—Endoscopy (QP-E), 2015, https://www.cag-acg.org/quality/quality-in-gastroenterology/qp-e
  13. Skills Enhancement for Endoscopy©, 2015, http://www.cag-acg.org/skills-enhancement-for-endoscopy