BioMed Research International

Volume 2014 (2014), Article ID 458678, 11 pages

http://dx.doi.org/10.1155/2014/458678
Review Article

Psychometric Properties of Questionnaires on Functional Health Status in Oropharyngeal Dysphagia: A Systematic Literature Review

1School of Public Health, Tropical Medicine and Rehabilitation Sciences, James Cook University, Townsville, QLD 4811, Australia

2Department of Otorhinolaryngology and Head and Neck Surgery, Leiden University Medical Center, 2333 ZA Leiden, The Netherlands

3School of Occupational Therapy and Social Work, Curtin University, Perth, WA 6845, Australia

4RehaA Wil and RehaA Winterthur, RehaClinic, 5330 Bad Zurzach, Switzerland

Received 31 January 2014; Accepted 31 March 2014; Published 29 April 2014

Academic Editor: Nam-Jong Paik

Copyright © 2014 Renée Speyer et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Introduction. Questionnaires on Functional Health Status (FHS) are part of the assessment of oropharyngeal dysphagia. Objective. To conduct a systematic review of the literature on the psychometric properties of English-language FHS questionnaires in adults with oropharyngeal dysphagia. Methods. A systematic search was performed using the electronic databases PubMed and Embase. The psychometric properties of the questionnaires were determined based on the COSMIN taxonomy of measurement properties and definitions for health-related patient-reported outcomes and the COSMIN checklist, using preset psychometric criteria. Results. Three questionnaires were included: the Eating Assessment Tool (EAT-10), the Swallowing Outcome after Laryngectomy (SOAL), and the Self-report Symptom Inventory. The Sydney Swallow Questionnaire (SSQ) proved to be identical to the Modified Self-report Symptom Inventory. All FHS questionnaires obtained poor overall methodological quality scores for most measurement properties. Conclusions. The retrieved FHS questionnaires need psychometric reevaluation; if the overall methodological quality shows satisfactory improvement on most measurement properties, the use of the questionnaires in daily clinical practice and research can be justified. However, in case of insufficient validity and/or reliability scores, new FHS questionnaires need to be developed using, and reporting on, preestablished psychometric criteria as recommended in the literature.

1. Introduction

Oropharyngeal dysphagia is associated with high mortality rates [1]. Dysphagia can lead to an increased risk of dehydration, malnutrition, aspiration pneumonia, and death. Oropharyngeal dysphagia may also have a major impact on a patient’s health-related quality of life and well-being [2–4]. Early detection through screening is an essential first step in the management of dysphagia [5, 6]. Once a patient has been identified as being at risk of dysphagia, further assessment of swallowing function is required. Videofluoroscopy (VFS) and fiberoptic endoscopic evaluation of swallowing (FEES) are considered in the literature to be the gold standards in the assessment of dysphagia. Another important step after screening is the completion of patient self-administered questionnaires. Such inventories are designed to measure either health-related quality of life (HR-QoL) or functional health status (FHS) [6]. HR-QoL refers to the unique personal perception individuals may have of their health, taking into account social, functional, and psychological issues, whereas FHS is the influence of a given disease on particular functional aspects [7]. Within the context of oropharyngeal dysphagia assessment, FHS questionnaires aim to quantify the symptomatic severity of dysphagia as experienced by the patient.

The use of a particular tool to evaluate a patient’s current health status or the effects of a medical intervention, whether for screening or assessment of oropharyngeal dysphagia, can only be justified if it has demonstrated reliability and validity. Systematic literature reviews have been published on the psychometric properties of oropharyngeal dysphagia screening [8, 9] and HR-QoL questionnaires [10] but not on FHS questionnaires.

The purpose of this systematic literature review is to (a) provide an overview of existing FHS questionnaires, (b) determine the corresponding psychometric properties, and (c) provide recommendations for the use of FHS questionnaires in both clinical practice and in research.

2. Methods

A systematic literature search was performed by two independent reviewers using two electronic databases: PubMed and Embase. All appropriate journal articles up to June 2013 were included. To ensure a comprehensive approach to retrieving relevant publications, MeSH and thesaurus terms were supplemented by free-text words (see Table 1). Only original research articles describing FHS questionnaires in oropharyngeal dysphagia were included. The search was limited to publications and questionnaires written in English. Reviews, case reports, and editorials were excluded, as were questionnaires not related to oropharyngeal dysphagia (e.g., esophageal dysphagia or gastroesophageal reflux disease), questionnaires mainly focusing on health-related quality of life (HR-QoL), generic questionnaires, and questionnaires targeted at population groups other than adults with oropharyngeal dysphagia (e.g., children or health care providers). The reference lists of all included articles were searched for additional literature. Differences of opinion about the inclusion of articles were settled by group discussion until consensus was reached.

Table 1: Functional health status questionnaires in oropharyngeal dysphagia: search strategy.

Next, an extended search was conducted for each of the included questionnaires to ensure that all articles on their development and validation were retrieved, using the name of each questionnaire in combination with its respective acronym. The psychometric properties of the included questionnaires were determined using the COSMIN taxonomy of measurement properties and definitions for health-related patient-reported outcomes [11] (see Table 2). The COSMIN checklist [12] was used as a standardised tool to evaluate the methodological quality of studies on psychometric properties. The COSMIN checklist consists of nine domains, each dealing with one of the following psychometric properties: internal consistency, reliability (relative measures, including test-retest reliability, interrater reliability, and intrarater reliability), measurement error (absolute measures), content validity (including face validity), structural validity, hypotheses testing, cross-cultural validity, criterion validity, and responsiveness. Interpretability is not considered to be a psychometric property. Each domain of the COSMIN checklist contains 5 to 18 items on aspects of study design and statistical methods. The methodological quality scores per psychometric property were calculated using a 4-point rating scale according to Terwee et al. [13]: excellent, good, fair, or poor. An overall methodological quality score per psychometric property is obtained by taking the lowest rating of any item in the corresponding domain ("worst score counts"). Psychometric ratings were discussed and agreed upon during consensus meetings. If applicable, evidence from different studies on the psychometric properties of the same questionnaire was summarised by combining the results as proposed by the Cochrane Back Review Group [14].
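To make the scoring rule concrete, the following is a minimal sketch in Python of the "worst score counts" aggregation described above; the domain and item ratings are hypothetical, not data from this review.

    # Ratings ordered from worst to best, as in the 4-point rating scale.
    RATING_ORDER = {"poor": 0, "fair": 1, "good": 2, "excellent": 3}

    def overall_domain_score(item_ratings):
        """Overall methodological quality score for one COSMIN domain:
        the lowest rating obtained on any item within that domain."""
        return min(item_ratings, key=lambda rating: RATING_ORDER[rating])

    # Hypothetical item ratings for a single domain (e.g., reliability):
    print(overall_domain_score(["excellent", "good", "fair", "good"]))  # -> fair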

Table 2: COSMIN: definitions of psychometric domains and properties for Health-Related Patient-Reported Outcomes (HR-PRO).

3. Results

3.1. Systematic Literature Search

The literature search of both PubMed and Embase yielded a total of 2,703 abstracts. Twelve original questionnaires were identified (see Table 3). Of those, two questionnaires were excluded because they contained mainly items on health-related quality of life: the Dysphagia Handicap Index by Silbergleit et al. [15] and the MD Anderson Dysphagia Inventory by Chen et al. [16]. Four questionnaires were excluded because they were developed in a language other than English: the Deglutition Handicap Index [17], the Dysphagia Short Questionnaire [18], the Dysphagia in Multiple Sclerosis questionnaire [19], and the Swallowing Disturbance Questionnaire [20] were developed in French, Swedish, Italian, and Hebrew, respectively. The target population of the Mayo Dysphagia Questionnaire-30 [21] consisted of patients with reflux-related disorders, and the questionnaire was therefore excluded. Similarly, the Dysphagia Disorders Survey [22], a questionnaire used by speech pathologists during mealtime observation of residential populations with intellectual disabilities, was excluded, as was the Caregiver Mealtime and Dysphagia Questionnaire [23], a questionnaire that focuses on caregiver compliance.

Table 3: Overview of functional health status questionnaires: reasons for inclusion and exclusion.

Finally, three self-administered questionnaires were included: the Eating Assessment Tool (EAT-10) [24], the Swallowing Outcome after Laryngectomy (SOAL) [25], and the Self-report Symptom Inventory [26]. The Sydney Swallow Questionnaire (SSQ) [27] is identical to the previously published Self-report Symptom Inventory by Wallace et al. [26]. All three questionnaires represent original English-language FHS questionnaires for adult patients with oropharyngeal dysphagia.

3.2. Functional Health Status Questionnaires

Table 4 provides information on the development of the EAT-10, the SOAL, and the Self-report Symptom Inventory. Initially, a Prototype Self-report Symptom Inventory was developed by Wallace et al. [26]. During the validation and reliability testing of this prototype, the final version, the Modified Self-report Symptom Inventory, was created. As the SSQ [27] is identical to the Modified Self-report Symptom Inventory, the SSQ is subsumed under the Modified Self-report Symptom Inventory. Table 5 gives an overview of the studies that were involved in the validation of the questionnaires. Both Tables 4 and 5 list the questionnaires included, the developmental and/or validation studies, the applied study designs, the study populations involved, and the subject characteristics of the target population.

Table 4: Description of studies for the development of questionnaires for the assessment of FHS in oropharyngeal dysphagia.
Table 5: Description of validation studies related to questionnaires for the assessment of FHS in oropharyngeal dysphagia.

Finally, Table 6 includes the characteristics of all three FHS questionnaires. All questionnaires contain one domain, with the exception of the SSQ. Although the Modified Self-report Symptom Inventory is identical to the SSQ, Dwivedi et al. [27] distinguish the domain of physiological swallow function from two separate items: one item on overall swallowing function and another on swallowing-related quality of life (HR-QoL). Upon closer inspection, the other questionnaires also included similar items on HR-QoL. For example, the EAT-10 items "The pleasure of eating is affected by my swallowing" and "Swallowing is stressful" could be considered HR-QoL questions rather than FHS questions. A similar observation could be made in the case of the SOAL item "Has your enjoyment of food reduced?". However, as the majority of items of all the questionnaires focus on FHS, the influence of a few HR-QoL items was considered to be unimportant.

Table 6: Characteristics of questionnaires for the assessment of FHS in oropharyngeal dysphagia.

The number of items per questionnaire varies between 10 and 19. The EAT-10 includes ten items using 5-point Likert scales (from "no problem" to "severe problem"), whereas the SOAL consists of 17 items using three response options: "no," "a little," or "a lot." Both the Self-report Symptom Inventory and the SSQ consist mainly of visual analogue scales. The lowest score for all questionnaires is zero (least impaired), whereas the highest possible scores range between 34 (SOAL) and 1708 (Prototype Self-report Symptom Inventory).
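As an illustration of how such totals are obtained, below is a minimal sketch in Python of total-score computation for an EAT-10-style questionnaire, assuming each of the ten items is scored from 0 ("no problem") to 4 ("severe problem") and that the total is a simple sum; the item responses shown are hypothetical.

    def eat10_total(responses):
        """Sum the ten item scores; higher totals indicate greater
        self-perceived swallowing impairment."""
        assert len(responses) == 10, "an EAT-10-style form has ten items"
        assert all(0 <= score <= 4 for score in responses), "items are scored 0-4"
        return sum(responses)

    # Hypothetical item responses for one patient:
    print(eat10_total([1, 0, 2, 3, 1, 0, 0, 2, 1, 1]))  # -> 11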

3.3. Psychometric Properties

The psychometric properties of all three FHS questionnaires were examined using the COSMIN taxonomy of measurement properties and definitions for health-related patient-reported outcomes [11]. Using the COSMIN checklist [12] and the 4-point rating scale according to Terwee et al. [13], overall scores of methodological quality for each measurement domain were obtained. The cross-cultural validity domain was not evaluated, as only original English-language questionnaires were included in the systematic literature review. The summarised psychometric consensus ratings of all questionnaires are depicted in Table 7. All statements on the rating of the methodological quality per measurement domain of each questionnaire in the next few paragraphs refer to the "worst score counts" criteria as described by Terwee et al. [13].

Table 7: Overview of the psychometric properties of FHS questionnaires in oropharyngeal dysphagia [11–13].
3.3.1. EAT-10 [24]

No factor analysis was performed to determine internal consistency. As Belafsky et al. [24] were the first to report on the EAT-10, no reference could be provided to another study supplying this information. Reliability scored poorly, as neither weighted or unweighted kappa values nor percentage agreement were reported. Pearson product moment correlations were calculated instead of intraclass correlation coefficients (ICCs); the authors therefore did not account for possible systematic differences in their data. Because no Standard Error of Measurement (SEM) was determined, measurement error scored poorly. No reference to age, gender, disease characteristics, country, or setting was considered during item selection, and no evaluation was conducted to determine whether all ten items reflected the construct (dysphagia). As a result, content validity scored poorly. No information was reported on structural validity. No information was provided describing the constructs or measurement properties of the comparator instruments, resulting in a poor rating on hypotheses testing. Criterion validity was not assessed. In relation to responsiveness, no information was provided on the constructs or measurement properties of the comparator instruments, the criterion used could not be considered an adequate gold standard, and no information was available on sensitivity or specificity; responsiveness thus scored poorly. Although interpretability is not considered a psychometric property, some comments can be made: no floor and ceiling effects were described, and no Minimal Important Change (MIC) or Minimal Important Difference (MID) was calculated.
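For reference, the SEM that the COSMIN rating asks for can be derived from a score standard deviation and a reliability coefficient (ideally an ICC rather than a Pearson correlation, for the reason noted above). The following is a minimal sketch in Python using the classical formula SEM = SD × √(1 − r); the numbers are hypothetical, not EAT-10 data.

    import math

    def standard_error_of_measurement(sd, reliability):
        """Classical SEM: the score standard deviation scaled by the
        unreliable fraction of the score variance."""
        return sd * math.sqrt(1.0 - reliability)

    # Hypothetical values: total-score SD of 7.0 and test-retest ICC of 0.85.
    print(round(standard_error_of_measurement(sd=7.0, reliability=0.85), 2))  # -> 2.71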

3.3.2. SOAL [25]

Internal consistency scored poorly for reasons similar to those for the previous questionnaire: no factor analysis was performed, and no other studies were available to provide this information, as Govender et al. [25] were the first to report on the SOAL. Reliability was not assessed. No SEM was calculated, resulting in a poor rating of measurement error. Again, no reference to age, gender, disease characteristics, country, or setting was considered during item selection, nor was any evaluation conducted to determine whether all 17 items reflected the construct (dysphagia); content validity therefore received a poor rating. No information on structural validity was reported. The sample size was considered small (fewer than 30 subjects per analysis), thus resulting in a poor rating of hypotheses testing. When considering criterion validity, the authors used Pearson correlation coefficients instead of Spearman's rho for correlations between ordinal data. Furthermore, it was not clear how missing responses to items were handled. Therefore, criterion validity scored fair. Responsiveness, on the other hand, scored poorly because no longitudinal design was used. As far as interpretability was concerned, no floor and ceiling effects, MIC, or MID were calculated.
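To illustrate the criterion-validity issue raised above, the following minimal Python sketch contrasts Pearson's coefficient, which assumes interval-level linear data, with Spearman's rho, which is rank-based and therefore suited to ordinal data; the paired scores below are hypothetical and are not data from Govender et al.

    from scipy.stats import pearsonr, spearmanr

    soal_totals = [2, 5, 9, 14, 20, 27, 30, 33]  # hypothetical SOAL totals
    criterion = [1, 1, 2, 2, 3, 3, 4, 4]         # hypothetical ordinal criterion grades

    print(pearsonr(soal_totals, criterion))   # treats scores as interval data
    print(spearmanr(soal_totals, criterion))  # rank-based; appropriate for ordinal data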

3.3.3. Self-Report Symptom Inventory/SSQ [2629]

When determining internal consistency, Wallace et al. [26] used a moderate sample size but presented only a Pearson's product moment correlation matrix; no Cronbach's alphas were calculated. Internal consistency thus scored fair. In determining reliability and measurement error, small sample sizes were used (fewer than 30 per analysis). The percentage agreement was calculated, but no weighted kappa calculations were reported, and SEM data were also missing. Both reliability and measurement error scored poorly. Content validity received a fair rating because the authors did not assess whether all items were relevant to the purpose of the application of the questionnaire. In terms of structural validity, it was unclear how missing items were handled, resulting in a fair rating. Because no information was provided on the constructs or measurement properties of the comparator instruments, hypotheses testing was considered poor. Information on criterion validity was not reported. In determining responsiveness, a moderate sample size was used, but no information was provided on how missing data were handled. Responsiveness was rated fair. Again, no floor and ceiling effects, MIC, or MID were reported (interpretability).
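Since the absence of Cronbach's alpha is what held the internal-consistency rating to fair, the following is a minimal sketch in Python of how alpha is computed from a respondents-by-items score matrix; the matrix below is hypothetical.

    import numpy as np

    def cronbach_alpha(scores):
        """alpha = k/(k-1) * (1 - sum of item variances / variance of totals),
        where rows are respondents and columns are items."""
        scores = np.asarray(scores, dtype=float)
        k = scores.shape[1]                          # number of items
        item_variances = scores.var(axis=0, ddof=1)  # per-item variance
        total_variance = scores.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

    # Hypothetical responses of five respondents to three items:
    data = [[3, 4, 3], [2, 2, 3], [4, 4, 5], [1, 2, 2], [3, 3, 4]]
    print(round(cronbach_alpha(data), 2))  # -> 0.93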

Dwivedi et al. [27] did not calculate internal consistency but referred to another study in which factor analysis had been performed, albeit not in a similar study population. Internal consistency was rated fair. The authors used a moderate sample size when determining reliability. Spearman correlation coefficients were calculated, but no evidence was provided that systematic change had not occurred (or that it had been accounted for), and it was unclear whether the patients were stable. Reliability was considered to be fair. Measurement error was rated poorly, as no SEM was calculated. Content validity, structural validity, and hypotheses testing were not assessed. Criterion validity was rated fair, although it was unclear whether the criterion used could be considered an adequate "gold standard." Responsiveness was not evaluated. Similar remarks as before apply regarding interpretability: no floor and ceiling effects, MIC, or MID were calculated.

Finally, two studies need to be mentioned briefly, although their information on the psychometric properties of the Self-report Symptom Inventory or SSQ is very limited. Dwivedi et al. [28] did not evaluate any psychometric properties but calculated change scores (i.e., means and standard deviations) for relevant (sub)groups (e.g., for normative groups and subgroups of patients). Such information fits under interpretability, but again, no information on floor and ceiling effects, MIC, or MID was presented. Manjaly et al. [29] considered responsiveness using a small sample size. No correlations were calculated, nor was a criterion used. Responsiveness was rated as poor. As in all previous studies, no floor and ceiling effects, MIC, or MID were calculated.
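The floor- and ceiling-effect check that none of the reviewed studies reported is straightforward to perform; a common rule of thumb (e.g., in the quality criteria of Terwee and colleagues) flags an effect when more than 15% of respondents obtain the minimum or maximum possible score. Below is a minimal sketch in Python; the score totals and the 15% threshold are illustrative assumptions.

    def floor_ceiling_effects(totals, min_score, max_score, threshold=0.15):
        """Flag floor/ceiling effects when the share of respondents at the
        scale minimum or maximum exceeds the chosen threshold."""
        n = len(totals)
        at_floor = sum(t == min_score for t in totals) / n
        at_ceiling = sum(t == max_score for t in totals) / n
        return {"floor": at_floor > threshold, "ceiling": at_ceiling > threshold}

    # Hypothetical totals on a 0-40 scale:
    print(floor_ceiling_effects([0, 0, 3, 12, 25, 40, 40, 40], 0, 40))
    # -> {'floor': True, 'ceiling': True}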

4. Discussion

When considering the restricted number of published FHS questionnaires available (Table 3) and the overall poor ratings on their psychometric properties (Table 7), it is evident that more research is needed in the area of FHS in oropharyngeal dysphagia. First, authors frequently did not evaluate all psychometric properties as defined by Mokkink et al. [11], resulting in missing data ("NR"). Second, when assessing the psychometric properties, authors seldom met the criteria described in the 4-point rating scale [13]; most studies simply failed to meet the "worst score counts" criteria. It seems that even though Terwee et al. [13] and Mokkink et al. [11] specialise in the evaluation of psychometric qualities of health-related questionnaires, their rating system is so stringent that it is unable to differentiate between the more subtle psychometric qualities of instruments. Although all the FHS questionnaires lacked sufficient validation, the ability to distinguish between all the "poor" ratings seems desirable.

In general, most FHS questionnaires reported on in this study received poor overall methodological quality scores per measurement domain. However, when reevaluating the reliability and validity of these questionnaires according to preset quality criteria on psychometrics, it is possible that the methodological outcome per measurement property may show significant positive changes. If the overall methodological quality shows satisfactory improvement on most measurement properties, the use of the questionnaires in daily clinical practice and research can be justified. Conversely, without satisfactory improvement on measurement properties, new FHS questionnaires need to be developed using, and reporting on, preestablished psychometric criteria as recommended in the literature.

5. Conclusions

(i) A systematic literature search retrieved three original English-language FHS questionnaires: the Eating Assessment Tool (EAT-10) [24], the Swallowing Outcome after Laryngectomy (SOAL) [25], and the Self-report Symptom Inventory [26]. The Sydney Swallow Questionnaire (SSQ) [27] is identical to the previously published Self-report Symptom Inventory by Wallace et al. [26].

(ii) The psychometric properties of all three FHS questionnaires were determined using the COSMIN taxonomy of measurement properties and definitions for health-related patient-reported outcomes [11], the COSMIN checklist [12], and the psychometric criteria of the 4-point rating scale according to Terwee et al. [13]; all three FHS questionnaires obtained poor overall methodological quality scores for most psychometric properties.

(iii) All FHS questionnaires need psychometric reassessment; if the overall methodological quality shows satisfactory improvement on most measurement domains, the use of the questionnaires in daily clinical practice and research can be justified. However, in cases of insufficient validity and/or reliability scores, it is recommended to develop new FHS questionnaires using, and reporting on, preestablished psychometric criteria as suggested in the literature.

(iv) In general, when assessing the validity and reliability of FHS or other health-related questionnaires, researchers must use preestablished quality criteria, such as those of Terwee et al. [13], when reporting on the psychometric properties of their instrument.

(v) Arguably the most important conclusion may be that academics should be educated on the psychometric domains that require reporting when developing and validating an FHS questionnaire or any other health-related questionnaire.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

  1. J. C. Sharma, S. Fletcher, M. Vassallo, I. Ross, and K. Mill, “What influences outcome of stroke—pyrexia or dysphagia?” International Journal of Clinical Practice, vol. 55, no. 1, pp. 17–20, 2001.
  2. O. Ekberg, S. Hamdy, V. Woisard, A. Wuttge-Hannig, and P. Ortega, “Social and psychological burden of dysphagia: its impact on diagnosis and treatment,” Dysphagia, vol. 17, no. 2, pp. 139–146, 2002.
  3. R. Martino, N. Foley, S. Bhogal, N. Diamant, M. Speechley, and R. Teasell, “Dysphagia after stroke: incidence, diagnosis, and pulmonary complications,” Stroke, vol. 36, no. 12, pp. 2756–2763, 2005.
  4. C. A. McHorney, J. Robbins, K. Lomax et al., “The SWAL-QOL and SWAL-CARE outcomes tool for oropharyngeal dysphagia in adults: III. Documentation of reliability and validity,” Dysphagia, vol. 17, no. 2, pp. 97–114, 2002.
  5. L. Perry and C. P. Love, “Screening for dysphagia and aspiration in acute stroke: a systematic review,” Dysphagia, vol. 16, no. 1, pp. 7–18, 2001.
  6. R. Speyer, “Oropharyngeal dysphagia: screening and assessment,” Otolaryngologic Clinics of North America, vol. 46, pp. 989–1008, 2013.
  7. C. E. Ferrans, J. J. Zerwic, J. E. Wilbur, and J. L. Larson, “Conceptual model of health-related quality of life,” Journal of Nursing Scholarship, vol. 37, no. 4, pp. 336–342, 2005.
  8. G. J. J. W. Bours, R. Speyer, J. Lemmens, M. Limburg, and R. De Wit, “Bedside screening tests vs. videofluoroscopy or fibreoptic endoscopic evaluation of swallowing to detect dysphagia in patients with neurological disorders: systematic review,” Journal of Advanced Nursing, vol. 65, no. 3, pp. 477–493, 2009.
  9. B. Kertscher, R. Speyer, M. Palmieri, and C. Plant, “Bedside screening to detect oropharyngeal dysphagia in patients with neurological disorders: an updated systematic review,” Dysphagia, vol. 29, no. 2, pp. 204–212, 2014.
  10. A. A. Timmerman, R. Speyer, B. J. Heijnen, and I. R. Klijn-Zwijnenberg, “Psychometric characteristics of health-related quality of life questionnaires in oropharyngeal dysphagia,” Dysphagia, vol. 29, no. 2, pp. 183–198, 2014.
  11. L. B. Mokkink, C. B. Terwee, D. L. Patrick et al., “The COSMIN study reached international consensus on taxonomy, terminology, and definitions of measurement properties for health-related patient-reported outcomes,” Journal of Clinical Epidemiology, vol. 63, no. 7, pp. 737–745, 2010.
  12. L. B. Mokkink, C. B. Terwee, D. L. Patrick et al., “The COSMIN checklist for assessing the methodological quality of studies on measurement properties of health status measurement instruments: an international Delphi study,” Quality of Life Research, vol. 19, no. 4, pp. 539–549, 2010.
  13. C. B. Terwee, L. B. Mokkink, D. L. Knol, R. W. J. G. Ostelo, L. M. Bouter, and H. C. W. de Vet, “Rating the methodological quality in systematic reviews of studies on measurement properties: a scoring system for the COSMIN checklist,” Quality of Life Research, vol. 21, pp. 651–657, 2011.
  14. M. van Tulder, A. Furlan, C. Bombardier, and L. Bouter, “Updated method guidelines for systematic reviews in the Cochrane Collaboration Back Review Group,” Spine, vol. 28, no. 12, pp. 1290–1299, 2003.
  15. A. K. Silbergleit, L. Schultz, B. H. Jacobson, T. Beardsley, and A. F. Johnson, “The dysphagia handicap index: development and validation,” Dysphagia, vol. 27, no. 1, pp. 46–52, 2012.
  16. A. Y. Chen, R. Frankowski, J. Bishop-Leone et al., “The development and validation of a dysphagia-specific quality-of-life questionnaire for patients with head and neck cancer,” Archives of Otolaryngology—Head and Neck Surgery, vol. 127, no. 7, pp. 870–876, 2001.
  17. V. Woisard, M. P. Andrieux, and M. Puech, “Validation of a self-assessment questionnaire for swallowing disorders (Deglutition Handicap Index),” Revue de Laryngologie Otologie Rhinologie, vol. 127, no. 5, pp. 315–325, 2007.
  18. S. Martin, I. Catarina, E. Therese, and O. Claes, “The dysphagia short questionnaire: an instrument for evaluation of dysphagia—a validation study with 12 months' follow-up after anterior cervical spine surgery,” Spine, vol. 37, no. 11, pp. 996–1002, 2012.
  19. R. Bergamaschi, P. Crivelli, C. Rezzani et al., “The DYMUS questionnaire for the assessment of dysphagia in multiple sclerosis,” Journal of the Neurological Sciences, vol. 269, no. 1-2, pp. 49–53, 2008.
  20. J. T. Cohen and Y. Manor, “Swallowing disturbance questionnaire for detecting dysphagia,” Laryngoscope, vol. 121, no. 7, pp. 1383–1387, 2011.
  21. A. B. M. Grudell, J. A. Alexander, F. B. Enders et al., “Validation of the Mayo Dysphagia Questionnaire,” Diseases of the Esophagus, vol. 20, no. 3, pp. 202–205, 2007.
  22. J. J. Sheppard and R. Hochman, “Screening large residential populations for dysphagia,” in Proceedings of the 42nd Annual Meeting of the American Academy for Cerebral Palsy and Developmental Medicine, Toronto, Canada, 1988.
  23. N. Colodny, “Validation of the Caregiver Mealtime and Dysphagia Questionnaire (CMDQ),” Dysphagia, vol. 23, no. 1, pp. 47–58, 2008.
  24. P. C. Belafsky, D. A. Mouadeb, C. J. Rees et al., “Validity and reliability of the Eating Assessment Tool (EAT-10),” Annals of Otology, Rhinology and Laryngology, vol. 117, no. 12, pp. 919–924, 2008.
  25. R. Govender, M. T. Lee, T. C. Davies et al., “Development and preliminary validation of a patient-reported outcome measure for swallowing after total laryngectomy (SOAL questionnaire),” Clinical Otolaryngology, vol. 37, pp. 452–459, 2012.
  26. K. L. Wallace, S. Middleton, and I. J. Cook, “Development and validation of a self-report symptom inventory to assess the severity of oral-pharyngeal dysphagia,” Gastroenterology, vol. 118, no. 4, pp. 678–687, 2000.
  27. R. C. Dwivedi, S. St. Rose, J. W. G. Roe et al., “Validation of the Sydney Swallow Questionnaire (SSQ) in a cohort of head and neck cancer patients,” Oral Oncology, vol. 46, no. 4, pp. e10–e14, 2010.
  28. R. C. Dwivedi, S. St. Rose, E. J. Chisholm et al., “Evaluation of swallowing by Sydney Swallow Questionnaire (SSQ) in oral and oropharyngeal cancer patients treated with primary surgery,” Dysphagia, vol. 27, pp. 491–497, 2012.
  29. J. G. Manjaly, P. G. Vaughan-Shaw, O. T. Dale, S. Tyler, J. C. R. Corlett, and R. A. Frost, “Cricopharyngeal dilatation for the long-term treatment of dysphagia in oculopharyngeal muscular dystrophy,” Dysphagia, vol. 27, pp. 216–220, 2012.