Advances in Preventive Medicine
Volume 2017, Article ID 9780317, 8 pages
https://doi.org/10.1155/2017/9780317
Research Article

Readability Assessment of Online Patient Education Material on Congestive Heart Failure

Geisinger Commonwealth School of Medicine, Scranton, PA, USA

Correspondence should be addressed to Akhil Kher; akher09@gmail.com

Received 12 January 2017; Revised 16 April 2017; Accepted 7 May 2017; Published 1 June 2017

Academic Editor: Gerardo E. Guillén Nieto

Copyright © 2017 Akhil Kher et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Background. Online health information is used ever more widely by the general population. However, this information is typically written at a level that favors only a small percentage of readers, which can result in suboptimal medical outcomes for patients. Objective. The readability of online patient education materials on the topic of congestive heart failure was assessed using five readability assessment tools. Methods. The search phrase “congestive heart failure” was entered into the search engine Google. Of the first 100 websites, 70 met the inclusion and exclusion criteria and were assessed with five readability assessment tools. Results. Only 5 of the 70 websites were within the limits of the recommended sixth-grade readability level. The mean readability scores were as follows: Flesch-Kincaid Grade Level, 9.79; Gunning-Fog Score, 11.95; Coleman-Liau Index, 15.17; Simple Measure of Gobbledygook (SMOG) index, 11.39; and Flesch Reading Ease, 48.87. Conclusion. Most of the analyzed websites were above the recommended sixth-grade readability level. Efforts need to be made to better tailor online patient education materials to the general population.

1. Introduction

The readily accessible Internet has become the most popular educational resource for the general patient population [1–3]. Patients now consult the search engine Google about the diagnosis and treatment of their own health conditions before consulting their primary physician [4]. While a nearly unlimited knowledge base can be empowering, the unfiltered nature of the Internet can leave patients misinformed and anxious because of medical jargon and the difficult readability of patient education materials. Guidelines set forth by the American Medical Association (AMA) and the US Department of Health and Human Services (USDHHS) state that patient reading material should be written at no higher than a fifth- or sixth-grade reading level in order to be accessible and comprehensible to the general public [5].

Readability is quantified by formulas that use sentence length, word familiarity, syllable counts, and other factors to estimate the grade level a reader must have attained to comprehend the presented information. Many recently published articles show that medical websites are not pitched at communication levels appropriate for the general public [6–10]. In this cross-sectional study, the authors assessed online patient education materials on the topic of congestive heart failure (CHF). An estimated 5.7 million Americans suffer from CHF, and about half of those afflicted will die within five years of diagnosis [11]. Early intervention and treatment can improve quality and length of life for most patients. It is therefore crucial that online information about CHF be comprehensible to the general public so that patients can understand, manage, and track their condition appropriately. This study assessed the readability levels and reading ease of online CHF articles available to the general public via Google.

2. Methods

2.1. Search Engine

The Google search engine was used because the majority of patients that use the Internet for health-related information reported using Google [12, 13]. The search term “congestive heart failure” was entered into a Google Chrome web browser. The search was performed on November 29, 2016.

2.2. Inclusion and Exclusion Criteria

The first 100 search results were analyzed to determine if they would be eligible for inclusion. Websites were eligible for inclusion if they (1) were in English, (2) were free to access, and (3) provided information on CHF. Websites were excluded if they were advertisements for medical products or news articles or pertained only to animal-based diseases. This caused 30 results to be excluded, leaving 70 websites to be analyzed (see Table 1 for the list of websites included).

Table 1: List of congestive heart failure websites with their Google ranking and website type.
2.3. Readability Assessment

The readability of each website was assessed using five readability formulas (Table 2).

Table 2: Readability test formulas used to analyze patient education websites.

The Flesch-Kincaid Grade Level (FKGL) and the Flesch Reading Ease (FRE) are both calculated using the average sentence length (i.e., the number of words divided by the number of sentences) and the average syllables per word (i.e., the number of syllables divided by the number of words) using different formulas [14, 15].
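As an illustration, the two Flesch formulas can be sketched in a few lines using their standard published coefficients. Note that the study itself used an NIH-recommended online calculator rather than custom code; the function names below are our own.

```python
# Flesch-Kincaid Grade Level (FKGL) and Flesch Reading Ease (FRE).
# Both use the same two inputs with different weights: average sentence
# length (words per sentence) and average syllables per word.

def flesch_kincaid_grade(words: int, sentences: int, syllables: int) -> float:
    asl = words / sentences      # average sentence length
    asw = syllables / words      # average syllables per word
    return 0.39 * asl + 11.8 * asw - 15.59

def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    asl = words / sentences
    asw = syllables / words
    return 206.835 - 1.015 * asl - 84.6 * asw

# Example: a 100-word passage with 5 sentences and 150 syllables
# scores roughly FKGL 9.9 (about tenth grade) and FRE 59.6.
fkgl = flesch_kincaid_grade(100, 5, 150)
fre = flesch_reading_ease(100, 5, 150)
```

Because the weight on syllables per word dominates both formulas, polysyllabic medical vocabulary drives grade levels up quickly even when sentences are short.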

The Gunning-Fog Score (GFS) is calculated using the average sentence length and the number of polysyllabic words (i.e., those with three or more syllables) [16]. The counted polysyllabic words do not include (i) proper nouns, (ii) combinations of hyphenated words, or (iii) two-syllable verbs made into three with -es and -ed endings.
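A minimal sketch of the Gunning-Fog formula follows (the exclusions for proper nouns, hyphenated compounds, and -es/-ed verb inflections happen when counting complex words, before the formula is applied):

```python
def gunning_fog(words: int, sentences: int, complex_words: int) -> float:
    """Gunning-Fog Score. `complex_words` is the count of words with
    three or more syllables, after excluding proper nouns, hyphenated
    compounds, and two-syllable verbs inflected to three by -es/-ed."""
    asl = words / sentences                     # average sentence length
    pct_complex = 100.0 * complex_words / words  # % polysyllabic words
    return 0.4 * (asl + pct_complex)

# Example: 100 words in 5 sentences with 10 complex words gives 12.0,
# i.e., roughly twelfth-grade material.
gfs = gunning_fog(100, 5, 10)
```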

The Coleman-Liau Index (CLI) is calculated using the average number of letters per 100 words and the average sentence length [17]. Unlike the other four readability tests, the CLI does not assess the number of syllables in a given text.
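The Coleman-Liau Index can be sketched as follows; it needs only letter, word, and sentence counts, which is why it avoids syllable counting entirely:

```python
def coleman_liau(letters: int, words: int, sentences: int) -> float:
    """Coleman-Liau Index, based on letters and sentences per 100 words."""
    L = 100.0 * letters / words      # average letters per 100 words
    S = 100.0 * sentences / words    # average sentences per 100 words
    return 0.0588 * L - 0.296 * S - 15.8

# Example: 500 letters and 5 sentences per 100 words scores about 12.1.
cli = coleman_liau(500, 100, 5)
```

Using letters rather than syllables made the CLI cheap to compute mechanically, since letter counts require no dictionary or phonetic rules.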

The Simple Measure of Gobbledygook (SMOG) index is calculated using the number of polysyllabic words in three ten-sentence samples near the beginning, middle, and end of a piece of text [18]. If there are fewer than 30 sentences, the formula contains a factor to correct for this.
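The SMOG formula, including its built-in correction factor for samples that are not exactly 30 sentences long, can be sketched as:

```python
import math

def smog_index(polysyllables: int, sentences: int) -> float:
    """SMOG grade from a polysyllable count over a sentence sample.
    The 30/sentences factor scales counts from samples shorter (or
    longer) than the canonical 30 sentences."""
    return 1.0430 * math.sqrt(polysyllables * 30.0 / sentences) + 3.1291

# Example: 30 polysyllabic words across a 30-sentence sample scores
# about 8.8, i.e., roughly ninth-grade material.
smog = smog_index(30, 30)
```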

The five readability tests chosen have been widely used in a variety of previous studies. Each test assesses readability according to word difficulty and sentence length using different weighting factors. Five different readability tests were used in order to compare the readability of each website based upon different factors. The FRE is a 100-point scale with higher scores indicating more easily understood text (Table 3). The remaining four measures, FKGL, GFS, CLI, and SMOG, indicate the US academic grade level (number of years of education) necessary to comprehend the written material. For example, a score of 13.5 would indicate a grade level appropriate for a first-year undergraduate student, while 6 would indicate that the health information can be comprehended by an individual who is in or has completed the sixth grade. To prevent human error during calculations and for ease of use, a single online readability calculator recommended by the National Institutes of Health (NIH) was used for all five readability tests [19, 20].

Table 3: Flesch reading ease scores with equivalent US education level and USDHHS readability rating.

Prior to analyzing the data, the “ideal” criteria for the readability of the online resources were established. The USDHHS recommends health materials to be written at the 5th- or 6th-grade level to ensure wide understanding. Thus, the level of acceptable readability was determined to be greater than or equal to 80.0 for the FRE and less than or equal to 6.9 for the FKGL, GFS, CLI, and SMOG.
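These acceptability criteria translate directly into a simple check; the sketch below encodes them (the function name and example scores are ours, not the study's):

```python
def meets_guidelines(fre: float, fkgl: float, gfs: float,
                     cli: float, smog: float) -> bool:
    """True when a page meets the study's 'ideal' criteria:
    FRE >= 80.0 and every grade-level index <= 6.9."""
    return fre >= 80.0 and all(g <= 6.9 for g in (fkgl, gfs, cli, smog))

# The study's mean scores (FRE 48.87; FKGL 9.79; GFS 11.95; CLI 15.17;
# SMOG 11.39) fail the check; a hypothetical plain-language page passes.
typical_site_ok = meets_guidelines(48.87, 9.79, 11.95, 15.17, 11.39)
plain_site_ok = meets_guidelines(85.0, 5.5, 6.0, 6.5, 6.2)
```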

2.4. Statistical Analysis

Standard data entry and analysis were done using a Microsoft Excel spreadsheet. Independent upper-tailed hypothesis tests were conducted for each readability index. Results were considered statistically significant at a p value of 0.05 or less.
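The paper does not detail the exact test used; one plausible reading is a one-sample upper-tailed t test of each index's mean against the 6.9 benchmark. A sketch under that assumption, with entirely hypothetical scores, follows:

```python
import math
from statistics import mean, stdev

def upper_tailed_t(scores, benchmark=6.9):
    """One-sample t statistic for H1: mean(scores) > benchmark.
    This is our assumed test; the study does not specify its procedure."""
    n = len(scores)
    se = stdev(scores) / math.sqrt(n)  # standard error of the mean
    return (mean(scores) - benchmark) / se

# Hypothetical FKGL scores for six websites (NOT the study's data):
t = upper_tailed_t([9.0, 10.5, 11.2, 9.8, 12.1, 10.0])
# Compare t to the one-tailed critical value t(0.05, df=5), about 2.015;
# a larger t means the mean grade level significantly exceeds 6.9.
```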

3. Results

Of the 100 websites identified, only 70 met the study inclusion criteria and were analyzed for readability. Thirty websites were excluded because they did not describe CHF (14), pertained to animal-based diseases (7), were advertisements (4), required payment (3), or were news articles (2).

All five assessment tools reported statistically significant results, with p values less than the standard alpha value of 0.05. The distribution of each readability score for all of the websites evaluated is summarized in Table 4. A comparison of mean readability scores between general medical websites and specialty-specific websites is shown in Table 5. Of the 70 websites, only 5 (7.1%) were within the limits of the recommended sixth-grade reading level on at least one assessment tool (Table 6). No websites were at or under the sixth-grade reading level using all five assessment tools. Figure 1 details the median scores of the health information websites using a box-and-whisker plot.

Table 4: Mean readability scores of health information websites.
Table 5: Mean readability scores of general medical websites as compared with specialty websites.
Table 6: Category breakdown of readability scores of health information websites.
Figure 1: Box-and-whisker plots showing median readability scores of health information websites.

4. Discussion

Internet access has opened up a plethora of resources to use as education materials, but the writing style and jargon of most medically relevant articles favor a small percentage of the general public. To prevent confusion, undue stress, and misinformation, it is important for patients to have adequate and appropriate medical information available to them in all healthcare settings. However, material presented online cannot be utilized effectively if it is written in a style that is beyond the scope of the general population. One study showed that about 1 in 5 patients has utilized the Internet to obtain medical information; however, the majority of them encountered difficulties comprehending the information available to them [21]. The guidelines set forth by the AMA and USDHHS state that such information must be written at or below a sixth-grade reading level in order to be accessible to the public. The aim of this study is to elucidate the readability of available online health-related information in terms of these standard guidelines. This study focused specifically on congestive heart failure.

4.1. Online Health Information Readability

The search engine Google was used to identify websites relevant to congestive heart failure. Of the first 100 search results, 70 fit the inclusion criteria of this study and were analyzed with five readability assessment tools. As the results indicate, 92.9% of the CHF websites assessed were above the recommended levels. As Table 5 indicates, there was no significant difference in the readability scores of general medical websites as compared with specialty-specific websites. Only five websites fell within the recommended levels, and none of them passed all five assessment tools, further demonstrating that medical articles found online are not written at an appropriate level for the average US citizen. This was seen across both general medical websites and diagnosis-specific websites.

In the case of congestive heart failure, it is crucial for patients to understand, manage, and track their health condition to improve their quality and longevity of life. To our knowledge, this is one of few studies to use these five readability assessment tools to assess the readability of online patient education information relating specifically to congestive heart failure. The readability of web-based literature has been assessed in many healthcare arenas such as colorectal surgery, ophthalmology, dermatology, nephrology, orthopedics, psychiatry, and endocrinology [5, 22–27].

Hutchinson et al. used four of the five readability indices used in our study on websites that were also included and assessed in our study [28]. According to the prior study, the average readability of Wikipedia.org, MayoClinic.org, WebMD.com, Medicine.net, and NIH.gov on the disease-specific topic of congestive heart failure was found to be above the recommended sixth-grade reading level. This was consistent with the findings in our study when the average of all five of the readability assessment tools is taken. In both studies, the NIH website had the lowest average reading grade level of the five websites. Tulbert et al. also analyzed Wikipedia.org and WebMD.com using the FKGL and FRE readability tests and likewise found that both websites were above the sixth-grade reading level [23].

Another study analyzed a broad spectrum of websites relating to 16 medical specialties using ten readability indices, including the five used in this study [10]. According to that study, the American Academy of Family Physicians (AAFP) website was written above the recommended sixth-grade readability level using the FKGL, FRE, GFS, CLI, and SMOG. Our study also utilized the AAFP website for the more disease-specific topic of congestive heart failure and found that it was written above the recommended reading level in all five tests.

4.2. Health Literacy

Patient education is an integral part of the physician-patient relationship. However, a majority of US citizens are known to have limited health literacy [29, 30]. There have been several efforts undertaken to provide a profile of health literacy skills of specific patient populations that have found striking evidence of inadequate literacy skills, including in medical care settings [31].

Low health literacy has been found to negatively impact health and well-being. Low health literacy rates have been correlated with higher mortality in the elderly and impose a higher risk of living with chronic illnesses [29]. On an individual level, this results in preventable recurrent hospitalizations or clinic visits. On the national level, it has been found that inadequate health literacy costs the US economy between $106 billion and $236 billion annually. Health literacy has long been recognized as one of the central challenges in American healthcare [32]. We can take the necessary steps toward mitigating this issue by following recommendations to improve the readability of online health information.

4.3. Recommendations

It is important to adhere to the AMA and USDHHS guidelines in keeping patient education websites at a sixth-grade readability level or below in order to broaden the patient base that the information can reach. By doing so, more patients will be able to find the appropriate information on websites, be able to understand what they found, and be able to act appropriately on that understanding [33].

According to the NIH, this is particularly important for the first few lines of text, because if the reader encounters difficulty with a passage at the beginning, they may stop reading altogether [19]. The Institute for Healthcare Improvement recommends using simpler words and shorter sentences and avoiding medical jargon, all of which will also improve website readability scores [34]. Websites with scores that do not meet the AMA and USDHHS guidelines should consider rewriting their materials to aim for a grade level of 6.9 or below and an FRE score of 80.0 or above. These changes are a simple, quick, and cost-effective way to make a definite improvement in the readability of available online health-related information for the general public.
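To make the effect of these recommendations concrete, the sketch below compares the Flesch Reading Ease of a jargon-heavy sentence against a plain-language rewrite. The syllable counter is a deliberately naive vowel-group heuristic of our own; real readability tools, including the calculator used in this study, use more accurate methods, but the direction of the effect is the same.

```python
import re

def naive_syllables(word: str) -> int:
    """Rough syllable estimate: count vowel groups, treating a trailing
    silent 'e' as non-syllabic. Illustrative only, not exact."""
    word = re.sub(r"[^a-z]", "", word.lower())
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def naive_fre(text: str) -> float:
    """Flesch Reading Ease with naive word, sentence, syllable counts."""
    words = text.split()
    sentences = max(sum(text.count(c) for c in ".!?"), 1)
    syllables = sum(naive_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) \
                   - 84.6 * (syllables / len(words))

# Hypothetical before/after pair (our examples, not from the study):
jargon = "Congestive heart failure necessitates comprehensive pharmacological management."
plain = "Heart failure needs good drug care."
# The plain rewrite scores far higher (easier) on the FRE scale.
```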

4.4. Limitations

This study has a few important limitations. Only the first 100 results for a single search phrase were reviewed, which represents only a portion of all the available websites on our topic of interest, although related search terms yielded very similar results. We also used only one search engine for retrieving information; however, the search engine we used is the most widely used engine globally for obtaining health-related information [7, 35].

The scope of this study is focused particularly on US English-speaking patient populations. Websites were excluded if they were non-English, and thus the results may not be applicable to a non-English-speaking patient population. The results obtained may be location-specific because search results will vary based on the server used; thus it is difficult to draw more general conclusions about the entire global patient population. Additionally, the readability indices that were utilized were originally created to gauge the readability of English texts using US grade levels. However, the authors of this study acknowledge that the data results may be extrapolated and applied to English-speaking patient populations outside the US, in which the readability grade level can be considered the number of years of formal education conducted in English.

As this is a cross-sectional study, we acknowledge that our search results reflect a snapshot in time from a single location and do not represent or account for every patient's search experience. The available resources on the Internet are always growing and changing, so search results retrieved at different moments in time may differ. The results of our study are meant to instigate reflection, to initiate efforts by website authors to improve readability, and to serve as a comparison point for reassessment in the future.

5. Conclusion

This study showed that the current readability of websites pertaining to congestive heart failure is poor. The Internet has become a powerful, accessible resource that many patients use for their own medical management and comprehension. However, many patient education websites present material at a reading level that is not suitable for the average adult, causing both readability and comprehension to suffer. Poor health literacy has been found to negatively impact health and to inflate healthcare costs in the United States. It is therefore imperative to scrutinize the Internet resources available to the general populace in order to prevent mismanagement and subpar healthcare outcomes. We have highlighted easy, cost-effective methods, such as using shorter sentences and limiting medical jargon, for achieving the recommended readability level. We recommend that patient education websites be reevaluated for adherence to the readability guidelines set forth by the NIH and USDHHS in order to ensure that resources are accessible to a wider audience. Through this study, we hope to make website creators aware of the utility of readability indices in achieving this goal.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

  1. B. W. Hesse, D. E. Nelson, G. L. Kreps et al., “Trust and sources of health information: the impact of the internet and its implications for health care providers: findings from the first Health Information National Trends Survey,” Archives of Internal Medicine, vol. 165, no. 22, pp. 2618–2624, 2005.
  2. S. Fox and L. Rainie, “The online health care revolution: how the Web helps Americans take better care of themselves,” Pew Internet & American Life Project, pp. 1–23, 2000.
  3. J. F. Baker, B. M. Devitt, P. D. Kiely et al., “Prevalence of Internet use amongst an elective spinal surgery outpatient population,” European Spine Journal, vol. 19, no. 10, pp. 1776–1779, 2010.
  4. H. S. Wald, C. E. Dube, and D. C. Anthony, “Untangling the web: the impact of internet use on health care and the physician-patient relationship,” Patient Education and Counseling, vol. 68, no. 3, pp. 218–224, 2007.
  5. M. R. Edmunds, R. J. Barry, and A. K. Denniston, “Readability assessment of online ophthalmic patient information,” JAMA Ophthalmology, vol. 131, no. 12, pp. 1610–1616, 2013.
  6. S. Raj, V. L. Sharma, A. J. Singh, and S. Goel, “Evaluation of quality and readability of health information websites identified through India's major search engines,” Advances in Preventive Medicine, vol. 2016, Article ID 4815285, 6 pages, 2016.
  7. M. Memon, L. Ginsberg, N. Simunovic, B. Ristevski, M. Bhandari, and Y. V. Kleinlugtenbelt, “Quality of web-based information for the 10 most common fractures,” Interactive Journal of Medical Research, vol. 5, no. 2, p. e19, 2016.
  8. P. Grewal and S. Alagaratnam, “The quality and readability of colorectal cancer information on the internet,” International Journal of Surgery, vol. 11, no. 5, pp. 410–413, 2013.
  9. C. J. Keogh, S. M. McHugh, M. Clarke Moloney et al., “Assessing the quality of online information for patients with carotid disease,” International Journal of Surgery, vol. 12, no. 3, pp. 205–208, 2014.
  10. N. Agarwal, D. R. Hansberry, V. Sabourin, K. L. Tomei, and C. J. Prestigiacomo, “A comparative analysis of the quality of patient education materials from medical specialties,” JAMA Internal Medicine, vol. 173, no. 13, pp. 1257–1259, 2013.
  11. D. Mozaffarian, E. J. Benjamin, A. S. Go et al., “Heart disease and stroke statistics—2016 update,” Circulation, vol. 133, no. 4, pp. e338–e360, 2016.
  12. K. Purcell, J. Brenner, and L. Rainie, Search Engine Use, Pew Research Center, 2012.
  13. M. Aitken, T. Altmann, and D. Rosen, Engaging Patients through Social Media.
  14. R. Flesch, “A new readability yardstick,” Journal of Applied Psychology, vol. 32, no. 3, pp. 221–233, 1948.
  15. J. P. Kincaid, R. P. Fishburne Jr., R. L. Rogers, and B. S. Chissom, “Derivation of new readability formulas (automated readability index, fog count and flesch reading ease formula) for navy enlisted personnel,” Research Branch Report 56, Institute for Simulation and Training, 1975, http://stars.library.ucf.edu/istlibrary/56.
  16. R. Gunning, The Technique of Clear Writing, pp. 36–37, 1952.
  17. M. Coleman and T. L. Liau, “A computer readability formula designed for machine scoring,” Journal of Applied Psychology, vol. 60, no. 2, pp. 283–284, 1975.
  18. G. H. McLaughlin, “SMOG grading: a new readability formula,” Journal of Reading, vol. 12, no. 8, pp. 639–646, 1969.
  19. National Institutes of Health, “How to Write Easy-to-Read Health Materials,” https://medlineplus.gov/etr.html.
  20. Readability-Score.com, https://readability-score.com/.
  21. M. Murero, G. D'Ancona, and H. Karamanoukian, “Use of the Internet by patients before and after cardiac surgery: telephone survey,” Journal of Medical Internet Research, vol. 3, no. 3, article E27, 2001.
  22. J. M. U. Jayaweera and M. I. M. De Zoysa, “Quality of information available over internet on laparoscopic cholecystectomy,” Journal of Minimal Access Surgery, vol. 12, no. 4, pp. 321–324, 2016.
  23. B. H. Tulbert, C. W. Snyder, and R. T. Brodell, “Readability of patient-oriented online dermatology resources,” Journal of Clinical and Aesthetic Dermatology, vol. 4, no. 3, 2011.
  24. E. M. Moody, K. K. Clemens, L. Storsley, A. Waterman, C. R. Parikh, and A. X. Garg, “Improving on-line information for potential living kidney donors,” Kidney International, vol. 71, no. 10, pp. 1062–1070, 2007.
  25. F. Küçükdurmaz, M. M. Gomez, E. Secrist, and J. Parvizi, “Reliability, readability and quality of online information about femoracetabular impingement,” Archives of Bone and Joint Surgery, vol. 3, no. 3, pp. 163–168, 2015.
  26. H. Klil, A. Chatton, A. Zermatten, R. Khan, M. Preisig, and Y. Khazaal, “Quality of web-based information on obsessive compulsive disorder,” Neuropsychiatric Disease and Treatment, vol. 9, pp. 1717–1723, 2013.
  27. M. R. Edmunds, A. K. Denniston, K. Boelaert, J. A. Franklyn, and O. M. Durrani, “Patient information in Graves' disease and thyroid-associated ophthalmopathy: readability assessment of online resources,” Thyroid, vol. 24, no. 1, pp. 67–72, 2014.
  28. N. Hutchinson, G. L. Baird, and M. Garg, “Examining the reading level of internet medical information for common internal medicine diagnoses,” American Journal of Medicine, vol. 129, no. 6, pp. 637–639, 2016.
  29. N. D. Berkman, D. A. Dewalt, M. P. Pignone et al., “Literacy and health outcomes,” Evidence Report/Technology Assessment, vol. 87, pp. 1–8, 2004.
  30. M. Kutner, E. Greenberg, Y. Jin, and C. Paulsen, The Health Literacy of America’s Adults: Results from the 2003 National Assessment of Adult Literacy, 2003.
  31. R. E. Rudd, B. A. Moeykens, and T. C. Colton, Health and Literacy: A Review of Medical and Public Health Literature.
  32. R. H. Carmona, “Health literacy: a national priority,” Journal of General Internal Medicine, vol. 21, no. 8, p. 803, 2006.
  33. World Health Organization, “Health Literacy and Health Behaviour,” WHO, 2010.
  34. D. Seubert, Health Communications Toolkits: Improving Readability of Patient Education Materials, www.ihi.org/resources/Pages/Tools/HealthCommunicationsToolkitsImprovingReadabilityPtEdMaterials.aspx.
  35. D. M. Dalton, E. G. Kelly, and D. C. Molony, “Availability of accessible and high-quality information on the Internet for patients regarding the diagnosis and management of rotator cuff tears,” Journal of Shoulder and Elbow Surgery, vol. 24, no. 5, pp. e135–e140, 2015.