Volume 2010 (2010), Article ID 416796, 8 pages
Item Response Theory Analysis of Two Questionnaire Measures of Arthritis-Related Self-Efficacy Beliefs from Community-Based US Samples
1Department of Epidemiology, Mailman School of Public Health, Columbia University, New York, NY 10032, USA
2Department of Allied Health Science, Division of Physical Therapy, School of Medicine, University of North Carolina, Chapel Hill, NC 27599, USA
3Department of Psychology, The Ohio State University, Columbus, OH 43210, USA
4Thurston Arthritis Research Center, University of North Carolina, Chapel Hill, NC 27599, USA
5Departments of Medicine, Orthopedics, and Social Medicine, University of North Carolina, Chapel Hill, NC 27599, USA
Received 23 November 2009; Accepted 2 March 2010
Academic Editor: George D. Kitas
Copyright © 2010 Thelma J. Mielenz et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Using item response theory (IRT), we examined the Rheumatoid Arthritis Self-Efficacy scale (RASE), collected in the People with Arthritis Can Exercise RCT (346 participants), and two subscales of the Arthritis Self-Efficacy scale (ASE), collected in an Active Living Every Day (ALED) RCT (354 participants), to determine which better identifies low arthritis self-efficacy in community-based adults with arthritis. Item parameters were estimated in Multilog using the graded response model. The two ASE subscales are adequately explained by one factor. There was evidence of two locally dependent item pairs; one item from each pair was removed and the model was rerun. The exploratory factor analysis of the RASE indicated a multifactor structure, which ultimately led to a nine-factor solution. To permit IRT analysis, one item from each of the nine subfactors was selected. Both scales were effective at measuring a wide range of arthritis SE.
The benefits of physical activity for improving arthritis outcomes are well established [1–5]. High self-efficacy (SE) has been shown to be associated with better arthritis health outcomes, including adherence to physical activity recommendations. In fact, SE is one of the most important psychosocial determinants of physical activity behavior [7–11]. Bandura’s well-known definition of SE is based on social cognitive theory and “focuses on the individual’s personal confidence beliefs about his or her capacity to undertake behavior or behaviors that may lead to desired outcomes, such as health”. SE is a task- or behavior-specific construct, meaning that to increase physical activity one need only focus on SE for physical activity [13, 14].
Recent literature suggests the importance of evaluating both SE for a specific task and SE for disease self-care [6, 12]. More specifically, Marks et al. suggested that to be effective, interventions should focus not only on increasing SE for a specific task (e.g., physical activity) but also on enhancing arthritis SE (i.e., disease self-care) [6, 12]. This approach is supported by Kovar et al.’s intervention study evaluating a walking program in patients with knee osteoarthritis. They found that enhancing both physical activity SE and SE for arthritis self-care led to improvements in function without an increase in symptoms.
Because SE is modifiable, there is increasing interest in interventions that target it. If effective interventions are to be designed to increase SE for arthritis self-management, then accurate measurement of SE is crucial. An ongoing challenge has been identifying people with low SE for disease self-management in populations of persons with chronic diseases like arthritis [6, 12]. To assess the precision of SE measurement, we examined two arthritis SE scales using item response theory (IRT) in participants from two community-based randomized controlled trials (RCTs) of physical activity in adults with arthritis.
IRT represents “a diverse family of models designed to represent the relation between an individual’s item response and underlying latent trait”. IRT has several notable benefits. First, in the context of health outcomes and disability, IRT models allow for the differential weighting of items in terms of their severity. IRT also provides item and test information functions. Information functions describe not only how much information is provided by a given item or test, but also where along the trait continuum that information is provided. This knowledge can play a crucial role when choosing a scale for a particular purpose: one scale may measure low levels of SE very well but fail to adequately assess higher levels. We hypothesized that the two SE scales studied here would possess different measurement characteristics, and that those characteristics would provide guidance in determining which measure is preferred in a given situation, with the overall goal of increasing the precision of SE measurement.
2. Materials and Methods
The first RCT evaluated outcomes of the People with Arthritis Can Exercise (PACE) program. Detailed methods for the PACE RCT are outlined by Callahan et al. in the main paper. The PACE project team worked in conjunction with the NC Arthritis Program and with community facilities throughout the state, including senior centers, assisted living communities, community centers, churches, and wellness centers, to recruit participants. The project conducted classes and assessments at 18 sites in counties throughout North Carolina. Class enrollment at the sites ranged from 6 to 34 participants, with a total of 346 participants recruited. To enroll, participants had to be exercising fewer than 3 times a week for 20 minutes at a time. The baseline assessments were conducted from August 2003 to November 2003. The study population had a mean age of 70; 90% were female, 75% were Caucasian, and 60% had more than a high school degree. Both the baseline and eight-week follow-up assessments involved administering self-report measures on symptoms, function (including physical performance tests), physical activity, and psychosocial outcomes. At the end of the 8-week intervention, study participants in the intent-to-treat analysis showed decreased pain and fatigue and increased arthritis SE.
Active Living Every Day (ALED) is a 20-week lifestyle program designed to teach the behavioral skills needed to become and stay physically active [18, 19]. The goal of the second RCT was to evaluate ALED against a delayed-start control in individuals with arthritis. The ALED instructors were recruited with the help of the North Carolina Area Agencies on Aging and were trained in Chapel Hill, NC in December 2003 by one of the original program developers from the Cooper Institute. Three hundred and fifty-four sedentary (exercising fewer than 3 times a week) participants enrolled from 17 urban and rural sites, recruited in a similar manner as PACE above. This study population had a mean age of 69 years; approximately 80% were female, 75% were Caucasian, and 50% had more than a high school education. Self-report assessments covered function (including physical performance), symptoms, physical activity, and psychosocial outcomes at baseline and 20 weeks. Two-level regression models (with site as the second level) were used to determine adjusted mean outcome values for the intervention and control groups at 20 weeks. In the intent-to-treat analyses, the intervention group showed improvement over the control group for all outcomes, with significant changes for several outcomes including gait speed, the 2-minute step test, and scores on the Community Healthy Activities Model Program for Seniors (CHAMPS) physical activity scale.
The 28-item Rheumatoid Arthritis SE scale (RASE) was completed by PACE participants at baseline and the 8-week follow-up; this study uses the baseline data. The RASE scale measures confidence in one’s ability to perform specific self-management behaviors for individuals with all forms of arthritis, even though it was initially developed for individuals with rheumatoid arthritis [20, 21]. The scale is self-administered and takes approximately ten minutes to complete. Scores from the RASE are created by summing the 28 items, each with a five-point Likert response pattern, yielding a possible range of 28 to 140 points. Higher scores indicate higher SE for arthritis self-management [20, 21]. The RASE has demonstrated sensitivity to change following a self-management education program (5.2, SD 15.5). The baseline RASE score in the PACE study was 105.05 (SD 12.66).
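As a minimal illustrative sketch (not code from the study), the RASE summed-scoring rule described above can be expressed as:

```python
# Illustrative sketch of RASE summed scoring (not the authors' code).
# Each of the 28 items uses a 5-point Likert response (assumed coded 1-5),
# so valid totals range from 28 to 140.
def score_rase(responses):
    if len(responses) != 28:
        raise ValueError("the RASE has 28 items")
    if any(not 1 <= r <= 5 for r in responses):
        raise ValueError("each item is scored 1-5")
    return sum(responses)

# A respondent answering the midpoint (3) on every item scores 84.
print(score_rase([3] * 28))  # 84
```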
The 5-item Pain (PSE) and 6-item Other Symptoms (OSE) subscales of the Arthritis SE scale (ASE) were collected from the ALED participants at baseline and at the 20-week follow-up; again, this study uses the baseline data. The ASE scale was developed by Lorig and colleagues to measure a respondent’s SE for arthritis self-management behaviors (e.g., decreasing pain, keeping pain from interfering with normal activities, and dealing with the frustration of having arthritis). These two subscales are estimated to take approximately five minutes to complete. The 9-item Function subscale is the third subscale of the ASE but was not collected in ALED. The items were scored with a 10-point response pattern, with 1 representing “very uncertain” and 10 “very certain.” Lorig et al. found both the 5-item PSE and the 6-item OSE subscales sensitive to change when evaluating the Arthritis Self-Management course using the ASE. The baseline scores in the ALED study are PSE 6.63 (SD 2.06) and OSE 6.94 (SD 2.14).
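A comparable sketch for the ASE subscales, assuming (consistent with the baseline means reported above being on the 1–10 metric) that a subscale score is the mean of its item ratings:

```python
# Hedged sketch of ASE subscale scoring (assumed convention: subscale score
# is the mean of item ratings on the 1-10 certainty scale).
def score_ase_subscale(responses):
    if any(not 1 <= r <= 10 for r in responses):
        raise ValueError("ASE items are rated 1-10")
    return sum(responses) / len(responses)

# Five hypothetical PSE item ratings:
print(score_ase_subscale([7, 6, 8, 5, 7]))  # 6.6
```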
Table 1 displays the items from the RASE and ASE utilized in this study.
The goal of this series of analyses was to obtain IRT-based item parameters for both the ASE and the RASE. Our original intention was to perform a unidimensional IRT analysis of both scales. Although published literature suggests that each scale exhibits multidimensionality, different approaches often yield different results [20, 22]. Even if the scales are found to be multidimensional, there are a number of strategies available to handle such a scale. We therefore performed the analyses with an eye towards identifying unidimensional scales, while remaining mindful of the potential for multiple dimensions. Exploratory and confirmatory factor analyses (EFA and CFA) were used to assess the extent to which a one-dimensional model could adequately explain the observed item responses. EFAs were conducted in CEFA using ordinary least squares (OLS) estimation, polychoric correlations, and oblique quartimax rotations (where necessary). In the EFA we focused on the scree plots and, where there was evidence of more than one factor, on the resulting factor loading matrix. The CFAs were conducted in LISREL, again with polychoric correlations, but this time using diagonally weighted least squares (DWLS) estimation to provide correct fit indices (see Wirth and Edwards, 2007, for a more detailed description) [24, 25]. There are a number of fit indices available when conducting structural equation modeling-based CFA, but we have found that a combination of the root mean square error of approximation (RMSEA), the comparative fit index (CFI), and the root mean square error (RMSE) provides a good balance of information about how well the model accounts for the observed data [26, 27]. RMSEA values less than 0.05 are viewed as indicating good model fit, values between 0.05 and 0.1 indicate moderate model fit, and values greater than 0.1 generally indicate poor model fit. CFI values greater than 0.9 indicate reasonable model fit, with values over 0.95 indicating good model fit. RMSE values less than 0.1 indicate good model fit. We favor the RMSEA, CFI, and RMSE (in that order) as indicators of fit, given the existing literature on model fit.
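The cutoffs above can be summarized in a small helper; this is a sketch of the decision rules exactly as stated in the text, not a general-purpose fit-assessment tool:

```python
# Sketch of the fit-index decision rules described in the text:
# RMSEA < 0.05 good, 0.05-0.1 moderate, > 0.1 poor;
# CFI > 0.95 good, > 0.9 reasonable; RMSE < 0.1 good.
def judge_fit(rmsea, cfi, rmse):
    verdicts = {}
    if rmsea < 0.05:
        verdicts["RMSEA"] = "good"
    elif rmsea <= 0.10:
        verdicts["RMSEA"] = "moderate"
    else:
        verdicts["RMSEA"] = "poor"
    if cfi > 0.95:
        verdicts["CFI"] = "good"
    elif cfi > 0.90:
        verdicts["CFI"] = "reasonable"
    else:
        verdicts["CFI"] = "poor"
    verdicts["RMSE"] = "good" if rmse < 0.10 else "questionable"
    return verdicts

print(judge_fit(0.04, 0.97, 0.05))
# {'RMSEA': 'good', 'CFI': 'good', 'RMSE': 'good'}
```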
Once a sufficiently unidimensional set of items had been identified, an IRT analysis was performed on each scale using the graded response model (GRM) as implemented in the Multilog software package [28, 29]. Following the IRT analysis, we examined the estimated item parameters, standard error curve (SEC), and test information function (TIF) to better understand both how individual items contribute to the scale and how the scale functions as a whole. Prior to any factor analytic or IRT analyses, we collapsed any response category chosen by fewer than 2% of respondents. This led to no collapsing on the ASE (which was surprising, given that each item had 10 response categories) and minimal collapsing on the RASE.
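For readers unfamiliar with the GRM, a minimal sketch of how it maps an item's slope and ordered thresholds to response-category probabilities follows; the parameter values are purely illustrative, not estimates from this study:

```python
import math

# Samejima's graded response model: the probability of responding in
# category k or higher follows a logistic curve with slope a and threshold
# b_k; individual category probabilities are differences of adjacent
# cumulative curves.
def grm_category_probs(theta, a, thresholds):
    cum = [1.0]
    cum += [1.0 / (1.0 + math.exp(-a * (theta - b))) for b in thresholds]
    cum.append(0.0)
    return [cum[k] - cum[k + 1] for k in range(len(thresholds) + 1)]

# Hypothetical 5-category Likert item with slope 2.0, symmetric thresholds:
probs = grm_category_probs(theta=0.0, a=2.0, thresholds=[-1.5, -0.5, 0.5, 1.5])
print([round(p, 3) for p in probs])  # [0.047, 0.222, 0.462, 0.222, 0.047]
```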
This study was approved by the University of North Carolina Biomedical Institutional Review Board and was conducted with the understanding and consent of the human subjects.
3. Results and Discussion
The analyses proceeded differently for the ASE and the RASE scales and in light of this we present the results from each in separate sections below.
3.1. ASE Results and Discussion
The initial validation study of the ASE found evidence for both two- and three-factor solutions. We focused on the items comprising what Lorig et al. titled the PSE and OSE subscales. Although these were found to constitute two separate factors in the original study, our results suggest that they are adequately explained by one factor. The scree plot from these 11 items is shown in Figure 1 and suggests one dominant factor. A one-factor model was fit in a CFA framework to assess model fit. The fit of the one-factor model to the 11 items was poor (, , ), at least judging by the RMSEA, the fit index we tend to focus on. There was some evidence in this solution for two locally dependent item pairs (1 & 2 and 4 & 5). LISREL automatically calculates modification indices (MIs) for parameters that are constrained in a particular model. In theory, they are chi-square distributed with one degree of freedom and represent the expected improvement in model fit if a particular parameter were freely estimated. The covariances among the residuals are typically constrained to zero in CFA models. Large MI values for particular residual covariances suggest that, even after accounting for their shared relationship to the latent construct, items are more related to one another than the model predicts. We removed one item from each pair (Items 1 and 5) and reran the model with the remaining nine items. This model seems to adequately explain the observed data (, , ).
Before moving to an IRT analysis, we wanted to be sure that a two-factor model was not more appropriate for these data. We fit a basic two-factor model and then, when the same evidence for locally dependent pairs arose, we added correlated errors to accommodate the excess covariance. Although the two-factor model with two correlated errors fit well (, , ), the correlation between the two factors was estimated at 0.95. A correlation of this magnitude strongly suggests that those two factors are, in fact, one factor.
Based on the strength of the factor analytic results we performed a unidimensional IRT analysis. In keeping with the results from the one-factor CFA, we omitted Items 1 and 5 from the IRT analysis. The parameter estimates from that analysis are given in Table 2. Although some of the slope parameters are high, subsequent analyses suggest that they are not inflated due to local dependence. The SEC and TIF for the modified 9-item version of the ASE are shown in Figure 2. As can be seen here the resulting scale provides highly reliable scores between −2.5 and 2 standard deviations. The precision quickly drops as scores increase above 2, as is noted by the increasing standard error curve and decreasing information curve. The marginal reliability for the nine-item scale was 0.95.
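The relationship between the TIF, the SEC, and reliability can be sketched as follows; the numeric values are illustrative, not read off Figure 2:

```python
import math

# In IRT, the standard error of measurement at a given theta is the
# reciprocal square root of the test information at that theta, and when
# theta is scaled to variance 1, conditional reliability is roughly
# 1 - SE(theta)^2, i.e., 1 - 1/information.
def standard_error(information):
    return 1.0 / math.sqrt(information)

def conditional_reliability(information):
    return 1.0 - 1.0 / information

# A test information of 20 implies SE of about 0.22 and reliability 0.95,
# the same order as the marginal reliability reported for the 9-item ASE.
print(round(standard_error(20), 2), conditional_reliability(20))  # 0.22 0.95
```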
The factor analytic results suggest that, despite published literature to the contrary, the PSE and OSE subscales from the ASE can be adequately accounted for by one underlying construct. We identified two locally dependent pairs of items and dealt with this by removing two items. In addition to alleviating the local dependence, this has the added benefit of shortening the scale slightly.
3.2. RASE Results and Discussion
The EFA results showed not only one dominant eigenvalue (11.0), but also two other sizeable subsequent eigenvalues (2.9 & 2.1). A three-factor solution was estimated, but the resulting factors did not appear coherent from a substantive standpoint. One- and three-factor models were fit in a CFA framework to provide fit indices. The one-factor model did not fit particularly well (, , ), but a three-factor model with a few cross loadings provided an appreciably better fit (, , ). Table 3 contains the factor loadings from this three-factor model. Despite the reasonable fit of this model, we found the lack of substantive coherence to be troubling.
The original validation study of the RASE suggested that it had eight factors and an additional three “orphan” items which did not load on any of those eight factors. We attempted to replicate their final model in a CFA framework, but the estimator converged to an inadmissible solution. Although several attempts were made to modify this model, all resulting solutions were inadmissible.
At this point, we went back to the items themselves and performed our own categorization process, where the number of factors and factor structure was determined based on a reading of the items. This led us to a nine-factor solution. We fit this model in a CFA framework and the model fit quite well (, , ). In an attempt to better understand the structure of this scale, we then fit a second-order factor model where a higher-order factor was underlying the nine lower order factors. While no direct comparisons between this and the base nine-factor CFA are possible (the models are unfortunately not nested), we note that the second-order model did account for these data reasonably well (, , ).
These results suggest that although there may be one common construct underlying the responses to the items found on the RASE, it does so through nine subfactors. To the extent that there are different numbers of items representing each of these subfactors, the resulting summed score will be a weighted combination of them. In an effort to avoid this weighting and to see if it would be possible to perform a unidimensional IRT analysis on a subset of the 28-item RASE, we selected one item from each of the nine subfactors. When choosing items, we tried to balance statistical characteristics (choosing items with high factor loadings in earlier analyses) and content validity (insuring that the resulting collection of items had face validity). The fit of a one-dimensional model for these nine items was then assessed using CFA. This model fits the data well (, , ), which suggests that for this nine-item subset, unidimensionality is a plausible assumption.
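The selection step can be caricatured as picking the highest-loading item within each subfactor; the subfactor names, item labels, and loadings below are entirely hypothetical, and the authors also weighed content validity by hand:

```python
# Hypothetical sketch: within each subfactor, select the item with the
# highest factor loading (a content review would then confirm that the
# resulting item set has face validity).
loadings_by_subfactor = {
    "exercise": {"item03": 0.72, "item11": 0.65},
    "pain_management": {"item07": 0.81, "item15": 0.58, "item20": 0.44},
    "mood": {"item09": 0.69, "item22": 0.61},
}
representatives = {
    subfactor: max(items, key=items.get)
    for subfactor, items in loadings_by_subfactor.items()
}
print(representatives)
# {'exercise': 'item03', 'pain_management': 'item07', 'mood': 'item09'}
```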
An IRT analysis was then conducted on those nine items. The resulting scale had a marginal reliability of 0.84 and, with the exception of one item, all slopes were greater than one (item parameters are provided in Table 4). As indicated in Figure 3, the nine-item subset has a relatively uniform level of measurement precision (standard errors between 0.3 and 0.4) between −3 and 2 standard deviations.
The factor analytic work for the RASE was substantially more complex than for the ASE. Neither the one-dimensional model we had hoped for nor the eight-dimensional model presented in the literature provided an adequate explanation of the RASE data. We went back to the item content and created our own “bins” into which the items appeared to fall, which led us to a nine-factor model. This model fit the data well, and an additional higher-order model also fit well. As previously mentioned, these two results suggest that while there may be nine subfactors, they are all related to some overarching latent factor. We proceeded by choosing one item from each subfactor to serve as the representative item for that subfactor on a shortened RASE.
The two populations here are from the southeastern US, and both have similar, somewhat homogeneous demographics (i.e., primarily female, educated, and Caucasian). The reliance of these self-efficacy measures on retrospective recall is a limitation, especially for the RASE, whose directions include “even if you are not actually doing it at the moment”. These scales were analyzed only cross-sectionally because determining the ability of each scale to detect change proved too complex for a single manuscript. Cross-population comparisons were not possible because we did not have data on both measures in one sample. We originally planned to equate the two arthritis SE scales, but even the slight wording variations between similar items were enough to preclude common-item equating procedures. Although we were not successful, our results may be informative to future researchers who want to apply common-item procedures to these scales.
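For context, one common-item equating step of the kind the authors considered is mean-sigma linking (described by Kolen and Brennan); the sketch below uses hypothetical difficulty parameters, since no truly common items exist between these two scales:

```python
import statistics

# Mean-sigma linking sketch (per Kolen & Brennan): given difficulty
# parameters for the same common items calibrated separately on two forms,
# find constants A, B such that theta_y = A * theta_x + B. The procedure
# requires identically worded common items, which is why wording variations
# between the RASE and ASE precluded it in this study.
def mean_sigma_link(b_x, b_y):
    A = statistics.pstdev(b_y) / statistics.pstdev(b_x)
    B = statistics.mean(b_y) - A * statistics.mean(b_x)
    return A, B

# Hypothetical common-item difficulties: form Y is shifted up by 0.5.
A, B = mean_sigma_link([-1.0, 0.0, 1.0], [-0.5, 0.5, 1.5])
print(A, B)  # 1.0 0.5
```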
We acknowledge that there are more complex solutions for a scale like the RASE. However, the alternative proposed here (the modified 9-item RASE) has the virtue of being shorter, representative of the construct of interest, and easy to implement with currently existing IRT software. In summary, these results show that, if necessary, unidimensional IRT could be used with a scale exhibiting the complex hierarchical structure of the RASE.
While the 9-item modified version of the two ASE subscales presented here is very effective at measuring much of the range of arthritis self-efficacy, it is not precise for individuals with very high levels (more than 2 standard deviations above the mean) of arthritis self-efficacy. The same holds for our modified 9-item version of the RASE. However, considering the very small number of individuals we would expect to have such high scores (roughly 2.5%), this is not a serious weakness. It could become problematic if either scale were used to assess a highly effective treatment; in that case, either scale might exhibit a ceiling effect that could mask improvement beyond a certain level. Although any comparison between the scales must be made with caution, it does appear that the 9-item modified version of the two ASE subscales provides more precise estimates than the modified 9-item RASE. This study is a first step towards increasing the precision of identifying people with arthritis and low SE. This information may better inform SE-enhancing interventions.
List of Abbreviations
- IRT: Item response theory
- RCT: Randomized controlled trial
- RASE: Rheumatoid Arthritis Self-Efficacy scale
- PACE: People with Arthritis Can Exercise
- ASE: Arthritis Self-Efficacy scale
- ALED: Active Living Every Day
- EFA and CFA: Exploratory and confirmatory factor analysis
- CHAMPS: Community Healthy Activities Model Program for Seniors
- PSE: Pain subscale of the Arthritis Self-Efficacy scale
- OSE: Other Symptoms subscale of the Arthritis Self-Efficacy scale
- CEFA: Comprehensive exploratory factor analysis
- OLS: Ordinary least squares
- LISREL: Linear structural relations
- DWLS: Diagonally weighted least squares
- RMSEA: Root mean square error of approximation
- CFI: Comparative fit index
- RMSE: Root mean square error
- GRM: Graded response model
- SEC: Standard error curve
- TIF: Test information function
The authors declare that they have no competing interests.
Thelma J. Mielenz conceived and designed the study, acquired the funding, participated in the analysis and interpretation of the data, drafted the manuscript, and participated in the collection of the original data. Michael C. Edwards participated in the study design, analysis, and interpretation of the data, and helped to draft the manuscript. Leigh F. Callahan participated in the design of the study, revised this manuscript, and was the PI of the 2 RCTs that acquired these data. All authors read and approved the final manuscript.
Grant support for this manuscript includes the American College of Rheumatology Research and Education Foundation Health Professional New Investigator Award (study design, analysis, and interpretation of the data; in writing of the manuscript and the decision to submit the manuscript for publication), a North Carolina Chapter’s Arthritis Foundation New Investigator Award (analysis, and interpretation of the data; in writing of the manuscript), and the Centers for Disease Control and Prevention through grants from the Association of American Medical Colleges (MM-0275-03/03 and MM-0644-04- for data collection).
- U.S. Department of Health and Human Services, Physical Activity and Health: A Report of the Surgeon General, Centers for Disease Control and Prevention, Atlanta, Ga, USA, 1999.
- M. S. Kaplan, N. Huguet, J. T. Newsom, and B. H. McFarland, “Characteristics of physically inactive older adults with arthritis: results of a population-based study,” Preventive Medicine, vol. 37, no. 1, pp. 61–67, 2003.
- W. H. Ettinger Jr., R. Burns, S. P. Messier, et al., “A randomized trial comparing aerobic exercise and resistance exercise with a health education program in older adults with knee osteoarthritis,” Journal of the American Medical Association, vol. 277, no. 1, pp. 25–31, 1997.
- A. Hakkinen, P. Hannonen, K. Nyman, T. Lyyski, and K. Hakkinen, “Effects of concurrent strength and endurance training in women with early or longstanding rheumatoid arthritis: comparison with healthy subjects,” Arthritis and Rheumatism, vol. 49, no. 6, pp. 789–797, 2003.
- M. A. Minor, “2002 Exercise and physical activity conference, St Louis, Missouri: exercise and arthritis “we know a little bit about a lot of things…”,” Rheumatoid Arthritis, vol. 49, pp. 1–2, 2003.
- R. Marks, J. P. Allegrante, and K. Lorig, “A review and synthesis of research evidence for self-efficacy-enhancing interventions for reducing chronic disability: implications for health education practice (part II),” Health Promotion Practice, vol. 6, no. 2, pp. 148–156, 2005.
- C. Keller, J. Fleury, N. Gregor-Holt, and T. Thompson, “Predictive ability of social cognitive theory in exercise research: an integrated literature review,” Journal of Knowledge Synthesis for Nursing, vol. 6, p. 2, 1999.
- E. McAuley, “The role of efficacy cognitions in the prediction of exercise behavior in middle-aged adults,” Journal of Behavioral Medicine, vol. 15, no. 1, pp. 65–88, 1992.
- R. E. Rhodes, A. D. Martin, J. E. Taunton, E. C. Rhodes, M. Donnelly, and J. Elliot, “Factors associated with exercise adherence among older adults. An individual perspective,” Sports Medicine, vol. 28, no. 6, pp. 397–411, 1999.
- N. E. Sherwood and R. W. Jeffery, “The behavioral determinants of exercise: implications for physical activity interventions,” Annual Review of Nutrition, vol. 20, pp. 21–44, 2000.
- K. L. Kubiak, The association of self-efficacy and outcome expectations with physical activity in adults with arthritis, thesis/dissertation, The University of North Carolina at Chapel Hill, Chapel Hill, NC, USA, 2004.
- R. Marks, J. P. Allegrante, and K. Lorig, “A review and synthesis of research evidence for self-efficacy-enhancing interventions for reducing chronic disability: implications for health education practice (part I),” Health Promotion Practice, vol. 6, no. 1, pp. 37–43, 2005.
- A. Bandura, Social Foundations of Thought and Action, Prentice-Hall, Englewood Cliffs, NJ, USA, 1986.
- A. Bandura, Self-Efficacy the Exercise of Control, W.H. Freeman, New York, NY, USA, 1997.
- P. A. Kovar, J. P. Allegrante, C. R. MacKenzie, M. G. E. Peterson, B. Gutin, and M. E. Charlson, “Supervised fitness walking in patients with osteoarthritis of the knee: a randomized, controlled trial,” Annals of Internal Medicine, vol. 116, no. 7, pp. 529–534, 1992.
- R. C. Fraley, N. G. Waller, and K. A. Brennan, “An item response theory analysis of self-report measures of adult attachment,” Journal of Personality and Social Psychology, vol. 78, no. 2, pp. 350–365, 2000.
- L. F. Callahan, T. Mielenz, J. Freburger, et al., “A randomized controlled trial of the people with arthritis can exercise program: symptoms, function, physical activity, and psychosocial outcomes,” Arthritis Care and Research, vol. 59, no. 1, pp. 92–101, 2008.
- L. F. Callahan, T. Mielenz, K. Donahue, J. Shreffler, J. M. Hootman, and T. Brady, “A randomized trial (RCT) of Active Living Every Day (ALED) in individuals with arthritis,” Arthritis Care and Research, vol. 54, no. 9, pp. S816–S817, 2006.
- L. F. Callahan, B. Schoster, J. Hootman, et al., “Modifications to the Active Living Every Day (ALED) course for adults with arthritis,” Preventing Chronic Disease, vol. 4, no. 3, article A58, 2007.
- S. Hewlett, Z. Cockshott, J. Kirwan, J. Barrett, J. Stamp, and I. Haslock, “Development and validation of a self-efficacy scale for use in British patients with rheumatoid arthritis (RASE),” Rheumatology, vol. 40, no. 11, pp. 1221–1230, 2001.
- T. J. Brady, “Measures of self-efficacy, helplessness, mastery, and control: the Arthritis Helplessness Index (AHI)/Rheumatology Attitudes Index (RAI), Arthritis Self-Efficacy Scale (ASES), Children's Arthritis Self-Efficacy Scale (CASE), Generalized Self-Efficacy Scale (GSES), Mastery Scale, Multi-Dimensional Health Locus of Control Scale (MHLC), Parent's Arthritis Self-Efficacy Scale (PASE), Rheumatoid Arthritis Self-Efficacy Scale (RASE), and Self-Efficacy Scale (SES),” Arthritis Care and Research, vol. 49, pp. S147–S164, 2003.
- K. Lorig, R. L. Chastain, E. Ung, S. Shoor, and H. R. Holman, “Development and evaluation of a scale to measure perceived self-efficacy in people with arthritis,” Arthritis and Rheumatism, vol. 32, no. 1, pp. 37–44, 1989.
- M. W. Browne, R. Cudeck, K. Tateneni, and G. Mels, “CEFA: comprehensive exploratory factor analysis,” 2004.
- K. G. Jöreskog and D. Sörbom, “LISREL,” Scientific Software International, Chicago, Ill, USA, 2004.
- R. J. Wirth and M. C. Edwards, “Item factor analysis: current approaches and future directions,” Psychological Methods, vol. 12, no. 1, pp. 58–79, 2007.
- P. M. Bentler, “Comparative fit indexes in structural models,” Psychological Bulletin, vol. 107, no. 2, pp. 238–246, 1990.
- M. W. Browne and R. Cudeck, “Alternative ways of assessing model fit,” in Testing Structural Equation Models, K. A. Bollen and J. S. Long, Eds., pp. 136–162, Sage, Newbury Park, Calif, USA, 1993.
- F. Samejima, “Estimation of latent ability using a response pattern of graded scores,” Psychometrika, vol. 35, supplement 17, p. 139, 1969.
- D. Thissen, MULTILOG: Multiple, Categorical Item Analysis and Test Scoring Using Item Response Theory, Scientific Software, Mooresville, Ill, USA, 1991.
- M. J. Kolen and R. L. Brennan, Test Equating, Scaling, and Linking, Springer, New York, NY, USA, 2004.