Research Article | Open Access
M. Mushfiqur Rahman, Jacek A. Kopec, Charlie H. Goldsmith, Aslam H. Anis, Jolanda Cibere, "Validation of Administrative Osteoarthritis Diagnosis Using a Clinical and Radiological Population-Based Cohort", International Journal of Rheumatology, vol. 2016, Article ID 6475318, 7 pages, 2016. https://doi.org/10.1155/2016/6475318
Validation of Administrative Osteoarthritis Diagnosis Using a Clinical and Radiological Population-Based Cohort
Objectives. The validity of administrative osteoarthritis (OA) diagnoses in British Columbia, Canada, was examined against X-rays, magnetic resonance imaging (MRI), self-report, and the American College of Rheumatology criteria. Methods. During 2002–2005, 171 randomly selected subjects with knee pain aged 40–79 years underwent clinical assessment for OA in the knee, hip, and hands. Their administrative health records for 1991–2004 were linked, and OA was defined in two ways: (AOA1) at least one physician diagnosis or hospital admission and (AOA2) at least two physician diagnoses in two years or one hospital admission. Sensitivity, specificity, and predictive values were compared across four reference standards. Results. The mean age was 59 years and 51% were men. The proportion with OA varied from 56.3 to 89.7% among men and 77.4 to 96.4% among women, depending on the reference standard. Sensitivity and specificity varied from 21 to 57% and 75 to 100%, respectively, and PPVs varied from 82 to 100%. Against the MRI assessment, the PPV of AOA2 was 100%. Sensitivity was higher for AOA1 than for AOA2; the reverse was true for specificity and PPV. Conclusions. The validity of administrative OA diagnoses in British Columbia varied with the case definition and reference standard. AOA2 is more suitable for identifying OA cases for research using this Canadian database.
1. Introduction
Osteoarthritis (OA) is one of the most prevalent chronic health conditions causing disability among the elderly [1, 2]. While the prevalence of OA in the general population depends on the joint sites, diagnostic methods, sex, age range, and geographic region, approximately 10–12% of the global population have OA [3–6]. In epidemiologic research, there is no simple way to define the presence or absence of OA or to distinguish between incident and progressive disease. However, accurate estimates are necessary for policy makers and healthcare professionals to improve the health of OA patients through disease management and public health programs [4, 5, 7–9]. In the British Columbia (BC) administrative database, the overall prevalence of OA in any joint was 10.8% in 2001. Other international studies have reported the prevalence of radiographic, symptomatic, and self-reported OA in the knee, hip, and hand joints [2, 6, 10–12].
The most common way to diagnose OA is radiographic examination using the Kellgren-Lawrence (K-L) grading system. Other methods include magnetic resonance imaging (MRI) [14, 15] and self-report. Knee, hand, and hip OA are also assessed using the American College of Rheumatology (ACR) clinical criteria [16–18]. Administrative health records are a useful resource for chronic disease surveillance because the data are routinely collected, cover wide geographic areas, and capture the great majority of subjects registered in the healthcare system. These databases are now frequently used in health research, where OA cases are identified by several definitions based on International Classification of Diseases (ICD) codes [3, 19, 20]. Utilizing these data requires assessing the validity of the case definitions. The accuracy of administrative OA case definitions has been validated in previous studies against self-reported population surveys and medical records. However, these studies covered only 2–5 years of observation and did not include MRI assessments.
In this study we aimed to examine the validity of OA diagnoses recorded in the BC administrative database. Our primary objective was to determine the sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and likelihood ratios of two administrative case definitions of OA. We examined the accuracy of these definitions using four reference standards that include X-rays, MRI, self-reports, and the ACR clinical criteria. Evaluating the validity of administrative OA diagnoses is an important step in conducting further research using these databases.
2. Materials and Methods
2.1. Data Source
A cohort of 255 subjects with knee pain was recruited through population random sampling in Vancouver, BC, between August 2002 and February 2005. Subjects met the inclusion criteria if they were between 40 and 79 years of age and had pain, aching, or discomfort in or around the knee at any time in the past 12 months. Subjects were excluded if they had inflammatory arthritis, fibromyalgia, knee arthroplasty, a history of knee surgery or injury within the past 6 months, knee pain referred from the hip or back, or an inability to undergo MRI. From the greater Vancouver telephone directory, 5,231 English-speaking persons were randomly contacted, of whom 3,269 (62.5%) agreed to participate in the survey. Of these 3,269 subjects, 91.9% were ineligible due to the age restriction and other exclusion criteria. Of the remaining 265 subjects, 10 were excluded because of missed appointments and other reasons. The study sample recruitment procedure has been described elsewhere. The 255 selected subjects underwent comprehensive clinical assessment, standardized joint examination, X-rays, and MRI to identify knee OA. Of the 255 subjects, clinical data on 171 were linked with administrative health records for the period 1991–2004 through personal health numbers, because written consent was available only for these subjects. The BC Ministry of Health approved access to and use of the data, facilitated by Population Data BC, for this study. The administrative database links the Medical Services Plan (MSP) payment information for 1990/91–2003/04, the PharmaCare data for 1990–2004, and hospital separation records for 1990/91–2003/04.
The administrative database includes information on date of birth, sex, physician billing for any health consultation, socioeconomic status by area of residence, hospital diagnoses, dates of hospital admissions, diagnosis codes from the 9th and 10th revisions of the ICD (ICD-9 and ICD-10, resp.), and death records for all individuals registered in the Medical Services Plan (MSP) of BC. MSP is a publicly funded plan in which approximately 99% of BC residents are registered. The study was approved by the Clinical Research Ethics Board at the University of British Columbia, Canada.
2.2. Administrative Definition of OA
Administrative OA was defined in two ways based on ICD-9 and ICD-10 codes, referred to as AOA1 and AOA2. AOA1 required at least one visit to a health professional or one hospital admission with the ICD-9 code of 715 or the ICD-10 codes from M15 to M19, and AOA2 required at least two visits to health professionals in two years separated by at least one day or one hospital admission with these codes. For AOA2, the date of the second qualifying visit was used to assign the diagnosis date. These ICD codes include symptomatic and radiographic OA in any joint except the spine. The most commonly used pain medications for OA treatment are acetaminophen and nonsteroidal anti-inflammatory drugs [8, 26]. Often these medications are available over the counter and require no prescriptions. Thus, it is not appropriate to include the history of pain medication use in OA case definitions.
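As a rough illustration (not the authors' actual extraction code), the two case definitions could be applied to a subject's claim records along the following lines. The record layout and field names (`type`, `code`, `date`) are assumptions made for this sketch:

```python
from datetime import date, timedelta

OA_ICD9 = ("715",)                               # ICD-9 OA code
OA_ICD10 = ("M15", "M16", "M17", "M18", "M19")   # ICD-10 OA codes

def is_oa_code(code: str) -> bool:
    """True if an ICD-9 or ICD-10 code denotes OA (any joint except the spine)."""
    return code.startswith(OA_ICD9) or code.startswith(OA_ICD10)

def aoa1(records) -> bool:
    """AOA1: at least one physician visit or hospital admission with an OA code."""
    return any(is_oa_code(r["code"]) for r in records)

def aoa2(records):
    """AOA2: one hospital admission, or two physician visits within two years
    and at least one day apart, with an OA code. Returns the assigned
    diagnosis date (the second qualifying visit) or None."""
    oa = [r for r in records if is_oa_code(r["code"])]
    hospital = sorted(r["date"] for r in oa if r["type"] == "hospital")
    if hospital:
        return hospital[0]
    # deduplicate same-day visits so consecutive dates are >= 1 day apart
    visits = sorted({r["date"] for r in oa if r["type"] == "visit"})
    for d1, d2 in zip(visits, visits[1:]):
        if d2 - d1 <= timedelta(days=730):       # within the two-year window
            return d2
    return None
```

Checking only consecutive deduplicated dates suffices: if any pair of visit dates falls within the two-year window, some consecutive pair does as well.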
2.3. Knee, Hand, and Hip OA Assessment
Knee OA was assessed with a comprehensive questionnaire which included duration of knee pain, frequency of pain (number of days over the past month), and pain location using a knee diagram. A standardized knee examination was performed by a rheumatologist. The ACR clinical criteria for knee OA include pain in the knee and any three of the following: (1) over 50 years of age, (2) less than 30 minutes of morning stiffness, (3) crepitus on active motion, (4) bony tenderness, (5) bony enlargement, and (6) no palpable warmth. The presence of hand OA was determined using the ACR criteria for hand OA, which include pain, aching, or stiffness in the hand and any three of the following: (1) hard tissue enlargement of two or more of the following joints: the 2nd and 3rd distal interphalangeal, the 2nd and 3rd proximal interphalangeal, and the 1st carpometacarpal joints of both hands; (2) hard tissue enlargement of 2 or more distal interphalangeal joints; (3) fewer than three swollen metacarpophalangeal joints; (4) deformity of 2 or more of the joints listed in (1). Although the ACR criteria for hip OA include pain in the hip and any two of the following: (1) ESR < 20 mm/hour, (2) radiographic femoral or acetabular osteophytes, and (3) radiographic joint space narrowing, only hip pain was assessed in our study.
2.4. Radiographic K-L Grade
Knee radiography was completed within a month of the clinical assessment. Details of the X-ray procedures have been described previously [22, 29]. X-rays were scored using the K-L 0–4 grading system independently by 2 readers who were blinded to the clinical and MRI information. The intraclass correlation coefficient was 0.79, and differences in readings were adjudicated by consensus between the 2 readers. Subjects were classified as having radiographic OA if their K-L grade was greater than or equal to 2.
2.5. MRI Cartilage Score
MRI for the most painful knee was performed within a month of clinical assessment. Detailed information regarding how MRI was performed has been described previously. Briefly, six joint areas were assessed, including the medial and lateral tibial plateau and femoral condyles, patella, and trochlear groove. Cartilage was graded on a semiquantitative scale of 0–4 based on the following definitions: 0 = normal, 1 = abnormal signal without a cartilage contour defect, 2 = contour defect of less than 50% cartilage thickness, 3 = contour defect of 50–99% cartilage thickness, and 4 = 100% cartilage contour defect with subjacent bone signal abnormality [30, 31]. The MRIs were read by a single reader, who was blinded to the radiographic and clinical information. The intrarater reliability of the cartilage readings was high, varying from 0.84 to 1.0 for different cartilage surfaces. Based on the MRI cartilage scores, subjects were classified as having knee OA if the score was greater than or equal to 2.
2.6. OA by Self-Report
In the baseline questionnaire, knee OA was assessed with two questions: (1) “Has a doctor ever told you that you have osteoarthritis (also called degenerative or wear-and-tear arthritis) in your right knee?” and (2) “Has a doctor ever told you that you have osteoarthritis (also called degenerative or wear-and-tear arthritis) in your left knee?” Pain in the hip joints was assessed by the following instruction: “In the following homunculus diagram each circle represents a joint. Please mark each joint where you have experienced pain or discomfort over the past 12 months.” Subjects were counted as having hip pain if they marked the hip joints in the homunculus diagram.
2.7. Reference Standard
For the selected subjects, knee OA was assessed based on the above four measurements. In addition, hand and hip OA were assessed using the ACR clinical criteria and the self-reported hip pain, respectively. Based on the knee, hand, and hip OA assessments, we defined four reference standards: RS1, RS2, RS3, and RS4. RS1 included assessments of knee and hand OA based on the ACR clinical criteria and hip OA based on self-reported hip pain. RS2 included assessments of knee, hand, and hip OA based on K-L grade, ACR clinical criteria, and self-reported hip pain, respectively. RS3 included assessments of knee, hand, and hip OA based on MRI cartilage score, ACR clinical criteria, and self-reported hip pain, respectively. RS4 included assessments of knee, hand, and hip OA based on self-reports, ACR clinical criteria, and self-reported hip pain, respectively. The same measurements for hand and hip OA were consistently included in the four reference standards.
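The composition of the four reference standards can be sketched as follows. This is an illustration only: the field names are invented, and the rule that a subject is reference-positive if any of the three joint assessments is positive is an assumption inferred from the text (it matches the any-joint scope of the administrative codes):

```python
def reference_positive(knee: bool, hand: bool, hip: bool) -> bool:
    """Assumed combination rule: OA-positive if any joint assessment is positive."""
    return knee or hand or hip

def classify(subject: dict, standard: str) -> bool:
    """Apply one of the four reference standards. Only the knee component
    differs between them; hand OA (ACR criteria) and hip OA (self-reported
    hip pain) are the same in all four."""
    knee = {
        "RS1": subject["acr_knee"],                # ACR clinical criteria
        "RS2": subject["kl_grade"] >= 2,           # radiographic K-L grade
        "RS3": subject["mri_cartilage"] >= 2,      # MRI cartilage score
        "RS4": subject["self_reported_knee"],      # self-reported diagnosis
    }[standard]
    return reference_positive(knee, subject["acr_hand"], subject["hip_pain"])
```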
2.8. Statistical Analysis
Baseline characteristics of the cohort were age, body mass index (BMI) (kg/m2), hip pain, symptomatic hand OA, and pain medication use. These characteristics were determined separately for men and women. We calculated the sensitivity, specificity, PPV, and NPV for each case definition according to the four reference standards, with 95% confidence intervals (CIs) for each statistic. For more detail on these measures, see Rothman et al. In addition, we calculated likelihood ratios (LR+ and LR−) and their 95% CIs, where LR+ = positive likelihood ratio = sensitivity/(1 − specificity) and LR− = negative likelihood ratio = (1 − sensitivity)/specificity. All analyses were performed using SAS V.9.3 (SAS Institute, Cary, NC, USA).
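The validity measures above all derive from the 2×2 table of administrative definition against reference standard. A minimal sketch of the point estimates (the paper does not state its CI method; the Wald interval below is one common choice, used here only as an illustration):

```python
from math import sqrt

def validity(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Validity measures from the 2x2 table: tp = definition-positive and
    reference-positive, fp = definition-positive only, fn = reference-positive
    only, tn = negative on both."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
        # LR+ is undefined when specificity is 100% (as for AOA2 vs. RS3)
        "LR+": sens / (1 - spec) if spec < 1 else float("inf"),
        "LR-": (1 - sens) / spec,
    }

def wald_ci(p: float, n: int, z: float = 1.96):
    """Simple Wald 95% CI for a proportion, truncated to [0, 1]."""
    half = z * sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)
```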
3. Results
Characteristics of the 171 subjects by sex are presented in Table 1. The mean age of the subjects was 59 years, and 51% were men. BMI ranged from 19 to 43, and overweight and obesity were more common in men than in women (p value = 0.02). Hip pain and hand OA were more common in women than in men (p value < 0.01). Statistically significant differences between men and women were observed in the proportion diagnosed with OA by each of the four reference standards except RS3. Among the four knee OA measurements, MRI detected the highest percentage of OA (91.7% in women and 88.5% in men) and X-rays the lowest (42.9% in women and 44.9% in men).
|RS1, RS2, RS3, and RS4 are four reference standards including knee, hand, and hip OA which are described in Methods.|
K-L = Kellgren-Lawrence, self-report = self-reported physician diagnosed knee OA, MRI = magnetic resonance imaging, and ACR = American College of Rheumatology.
The validation results of the two administrative OA definitions compared with the four reference standards are presented in Table 2. The sensitivity of case definitions AOA1 and AOA2 varied from 47 to 57% and 21 to 26%, respectively. Sensitivity was higher for AOA1 than for AOA2, and the highest sensitivity (95% CI) was 57% (48–66%) for AOA1 when the reference standard included self-reported physician-diagnosed knee OA. The specificity varied from 75 to 87% for AOA1 and from 91 to 100% for AOA2. The highest specificity (95% CI) was 100% (70–100%) for AOA2 when the reference standard included MRI assessment of knee OA. PPVs varied from 82 to 96% for AOA1 and from 85 to 100% for AOA2. The lowest NPV (95% CI) was 9% (5–15%) for AOA2 when the reference standard included the MRI score for knee OA. The positive likelihood ratio (LR+) was greater than 5 for AOA2 under reference standards RS3 and RS4; therefore AOA2 may be useful in ruling in OA. On the other hand, values of the negative likelihood ratio (LR−) were between 0.5 and 0.8; therefore, these definitions may not be very useful for ruling out OA.
|AOA1 includes at least one visit to a health professional or one hospital admission for osteoarthritis and AOA2 includes at least two visits to health professionals in two years or one hospital admission for osteoarthritis. RS1, RS2, RS3, and RS4 are four reference standards that include knee, hand, and hip OA which are described in Methods.|
PPV: positive predictive value; NPV: negative predictive value; LR+: positive likelihood ratio = sensitivity/(1 − specificity); LR−: negative likelihood ratio = (1 − sensitivity)/specificity.
4. Discussion
Based on the BC administrative health records, we assessed the validity of two case definitions of OA using four reference standards. The reference standards included radiographic K-L grade, MRI cartilage scores, self-report, and the ACR clinical criteria for the knee OA assessments; the ACR clinical criteria for the hand OA assessments; and self-reported hip pain for the hip OA assessments. Of the two administrative definitions, AOA1 had the higher sensitivity and NPV, whereas AOA2 had the higher specificity and PPV. Validity measures were similar across the four reference standards for each case definition, and both case definitions yielded a PPV of more than 82%.
Our validation results are comparable with those obtained by Lix et al., who used self-reported survey data as a reference standard. Using two years of data and a definition of at least two physician diagnoses or one hospital separation, the authors obtained a sensitivity of 42.6% and a specificity of 88.1%. For the definition based on one physician diagnosis, they obtained higher sensitivity but lower specificity, consistent with our results. The administrative health records may include some individuals whose OA went undiagnosed during the observation period, which could contribute to the lower-than-expected sensitivities of both case definitions. After examining the medical history of OA cases over a period of two years, Harrold et al. obtained a PPV of 62% for administrative OA diagnoses. The likely reasons we obtained higher PPVs are that we used 13 years of administrative records and that the prevalence of OA was higher in our cohort. In our cohort, the majority of subjects had preradiographic disease (K-L < 2); we observed that 90% of these symptomatic subjects had knee OA based on the MRI cartilage assessment. In contrast to X-rays, MRI can detect preradiographic as well as radiographic OA in the knee and other joints [15, 34]; consequently, higher specificity and PPV were obtained when the MRI knee assessment was used as the reference standard. In validation studies, PPV and NPV depend on the prevalence and severity of the disease. We therefore also calculated positive and negative likelihood ratios, which are independent of prevalence. On the basis of the likelihood ratios, AOA2 might be useful in ruling in OA.
The limitations of the present study need to be acknowledged. First, written consent to link clinical data with administrative records was received from only 171 subjects, which reduced the sample size and slightly changed the sample characteristics compared with those of the entire cohort. Second, some of these subjects were in the early stage of OA development. The recruitment period was 2002–2005, and administrative histories were linked from 1991 to 2004. Ideally, the clinical and administrative diagnoses would have been made in the same calendar year. However, among elderly patients with OA and other chronic diseases, OA often receives lower priority when they are assessed by a physician. Therefore, the number of OA cases captured in only 2-3 years of administrative records is expected to be lower than the actual number of cases. To minimize the number of undiagnosed OA cases, we observed the medical history of these subjects from 1991 to 2004; we did not include administrative records after the clinical assessment, to reduce false positives. Third, we used hip pain as a proxy for hip OA in the reference standards. Studies have shown that hip pain is considered the main feature of hip OA [35, 36]. The knee, hip, and hand are the most commonly affected joints [2, 6, 10–12], and individuals with OA in one joint are more likely to have the disease in other joints. By including hip OA cases based on hip pain, we added 1–11% additional OA cases to the reference standards, which is unlikely to overrepresent the actual hip OA cases. Fourth, OA in other locations, such as the foot, elbow, jaw, and shoulder, was not measured in the reference standards; this is unlikely to have a substantial effect on the validation results since the prevalence of OA in these locations is relatively low. Fifth, our study subjects were selected on the basis of knee pain.
Future validation studies of randomly selected subjects with symptomatic OA in any joint, as well as comparing clinical diagnoses of OA in all possible joints with administrative diagnoses, are needed. We have validated two commonly used case definitions of OA in this study. Validation studies focusing on other administrative definitions or algorithms for OA might be the subject of future studies.
The strengths of this study include the use of a representative clinical sample linked to administrative data. Our population-based cohort included subjects with preradiographic as well as advanced radiographic knee OA, and we compared two administrative OA definitions against four reference standards. To our knowledge, this is the first study to compare administrative case definitions with MRI-detected, cartilage-based OA assessments. Administrative databases are frequently used in OA research, yet there are few validation studies of administrative OA diagnoses. The primary objective in assembling this study cohort was to assess MRI-, X-ray-, and symptom-based measures for detecting early knee OA. In addition, symptomatic and self-reported data were collected for hand and hip OA, which strengthens the present study. In a site-specific validation study focusing on one joint at a time, the validation results may vary between sites. Since the administrative diagnosis includes OA in any joint except the spine, our validation results are not affected by site-specific variations.
Population-based administrative data have great potential for facilitating investigations of OA occurrence as well as OA comorbidity and outcome research. However, the fundamental question to be addressed is whether the data are valid for such purposes. Our study addressed this question by comparing two case definitions with four reference standards. The next question is which case definition should be applied for defining OA. It is noteworthy that the observed PPVs of both definitions were very high because the prevalence of OA exceeded 70% under the reference standards, whereas in the general population the prevalence of OA is 10–20%. The sensitivity of the definition requiring one physician claim or hospital admission was 47–57%, and its specificity was 75–87%; this suggests that potential overreporting is a concern when estimating general-population prevalence with this definition. On the other hand, the sensitivity of the definition requiring at least two physician claims in two years or one hospital admission was 21–26%, and its specificity was 91–100%; this suggests that prevalence would likely be underreported with the latter definition. In addition, the specificity and PPV of the latter definition were higher than those of the former, producing fewer false-positive cases. The definition of at least two physician claims in two years or one hospital admission would therefore be more appropriate for studies in which avoiding false positives is critical, such as etiological research or studies assessing the effect of OA on other health conditions in the population.
In conclusion, the validity of OA diagnoses in the British Columbia administrative health records varied with the case definition and reference standard. AOA2 is more suitable for identifying OA cases for research using this Canadian administrative database. Despite several limitations, we have validated two administrative case definitions against reference standards that included clinical and symptomatic diagnoses of knee, hand, and hip OA. Future validation studies based on clinical diagnoses of all joints affected by OA are needed, and since validation results may differ across administrative regions, further studies in different populations are needed for comparison.
An earlier version of this work was presented as an abstract at the OARSI (Osteoarthritis Research Society International) Annual Scientific Meeting in 2008.
The authors declare that they have no competing interests.
Dr. M. Mushfiqur Rahman acknowledges the Canadian Arthritis Network, The Arthritis Society, and the Canadian Institutes of Health Research for doctoral training awards. Dr. Jolanda Cibere acknowledges grants from the Canadian Institutes of Health Research and The Arthritis Society for the population-based cohort study and the Administrative Linkage Study.
- R. C. Lawrence, C. G. Helmick, F. C. Arnett et al., “Estimates of the prevalence of arthritis and selected musculoskeletal disorders in the United States,” Arthritis and Rheumatism, vol. 41, no. 5, pp. 778–799, 1998.
- R. C. Lawrence, D. T. Felson, C. G. Helmick et al., “Estimates of the prevalence of arthritis and other rheumatic conditions in the United States. Part II,” Arthritis and Rheumatism, vol. 58, no. 1, pp. 26–35, 2008.
- J. A. Kopec, M. M. Rahman, J.-M. Berthelot et al., “Descriptive epidemiology of osteoarthritis in British Columbia, Canada,” Journal of Rheumatology, vol. 34, no. 2, pp. 386–393, 2007.
- D. J. Hunter, “Osteoarthritis,” Best Practice & Research: Clinical Rheumatology, vol. 25, no. 6, pp. 801–814, 2011.
- D. D. Dunlop, L. M. Manheim, J. Song, and R. W. Chang, “Arthritis prevalence and activity limitations in older adults,” Arthritis and Rheumatism, vol. 44, no. 1, pp. 212–221, 2001.
- P. Suri, D. C. Morgenroth, and D. J. Hunter, “Epidemiology of osteoarthritis and associated comorbidities,” PM & R, vol. 4, no. 5, supplement, pp. S10–S19, 2012.
- D. T. Felson and Y. Zhang, “An update on the epidemiology of knee and hip osteoarthritis with a view to prevention,” Arthritis and Rheumatism, vol. 41, no. 8, pp. 1343–1355, 1998.
- D. T. Felson, R. C. Lawrence, M. C. Hochberg et al., “Osteoarthritis: new insights. Part 2: treatment approaches,” Annals of Internal Medicine, vol. 133, no. 9, pp. 726–737, 2000.
- J. Katz, “Total joint replacement in osteoarthritis,” Best Practice & Research Clinical Rheumatology, vol. 20, no. 1, pp. 145–153, 2006.
- M. Grotle, K. B. Hagen, B. Natvig, F. A. Dahl, and T. K. Kvien, “Prevalence and burden of osteoarthritis: results from a population survey in Norway,” Journal of Rheumatology, vol. 35, no. 4, pp. 677–684, 2008.
- J. M. Jordan, C. G. Helmick, J. B. Renner et al., “Prevalence of hip symptoms and radiographic and symptomatic hip osteoarthritis in African Americans and Caucasians: the Johnston County osteoarthritis project,” Journal of Rheumatology, vol. 36, no. 4, pp. 809–815, 2009.
- J. M. Jordan, C. G. Helmick, J. B. Renner et al., “Prevalence of knee symptoms and radiographic and symptomatic knee osteoarthritis in African Americans and Caucasians: The Johnston County Osteoarthritis Project,” Journal of Rheumatology, vol. 34, no. 1, pp. 172–180, 2007.
- J. H. Kellgren and J. S. Lawrence, “Radiological assessment of osteo-arthrosis,” Annals of the Rheumatic Diseases, vol. 16, no. 4, pp. 494–502, 1957.
- F. W. Roemer and A. Guermazi, “Osteoarthritis year 2012 in review: imaging,” Osteoarthritis and Cartilage, vol. 20, no. 12, pp. 1440–1446, 2012.
- D. Hayashi, F. W. Roemer, and A. Guermazi, “Osteoarthritis year 2011 in review: imaging in OA—a radiologists' perspective,” Osteoarthritis and Cartilage, vol. 20, no. 3, pp. 207–214, 2012.
- R. Altman, E. Asch, and D. Bloch, “Development of criteria for the classification and reporting of osteoarthritis. Classification of osteoarthritis of the knee,” Arthritis and Rheumatism, vol. 29, no. 8, pp. 1039–1049, 1986.
- R. Altman, G. Alarcón, D. Appelrouth et al., “The American College of Rheumatology criteria for the classification and reporting of osteoarthritis of the hand,” Arthritis & Rheumatism, vol. 33, no. 11, pp. 1601–1610, 1990.
- R. Altman, G. Alarcón, D. Appelrouth et al., “The American College of Rheumatology criteria for the classification and reporting of osteoarthritis of the hip,” Arthritis and Rheumatism, vol. 34, no. 5, pp. 505–514, 1991.
- L. R. Harrold, R. A. Yood, S. E. Andrade et al., “Evaluating the predictive value of osteoarthritis diagnoses in an administrative database,” Arthritis and Rheumatism, vol. 43, no. 8, pp. 1881–1885, 2000.
- M. M. Rahman, J. A. Kopec, A. H. Anis, J. Cibere, and C. H. Goldsmith, “Risk of cardiovascular disease in patients with osteoarthritis: a prospective longitudinal study,” Arthritis Care and Research, vol. 65, no. 12, pp. 1951–1958, 2013.
- L. M. Lix, M. S. Yogendran, S. Y. Shaw, C. Burchill, C. Metge, and R. Bond, “Population-based data sources for chronic disease surveillance,” Chronic Diseases in Canada, vol. 29, no. 1, pp. 31–38, 2008.
- J. Cibere, H. Zhang, A. Thorne et al., “Association of clinical findings with pre-radiographic and radiographic knee osteoarthritis in a population-based study,” Arthritis Care & Research, vol. 62, no. 12, pp. 1691–1698, 2010.
- British Columbia Ministry of Health, Medical Services Plan (MSP) Payment Information File, Population Data BC. Data Extract, MOH, 2007, http://www.popdata.bc.ca/data.
- British Columbia Ministry of Health, PharmaCare. Population Data BC. Data Extract, MOH, 2007, http://www.popdata.bc.ca/data.
- British Columbia Ministry of Health, Discharge Abstract Database (Hospital Separations), Population Data BC, Data Extract, MOH, 2007, http://www.popdata.bc.ca/data.
- W. Zhang, A. Jones, and M. Doherty, “Does paracetamol (acetaminophen) reduce the pain of osteoarthritis?: a meta-analysis of randomised controlled trials,” Annals of the Rheumatic Diseases, vol. 63, no. 8, pp. 901–907, 2004.
- P. Creamer, M. Lethbridge-Cejku, and M. C. Hochberg, “Where does it hurt? Pain localization in osteoarthritis of the knee,” Osteoarthritis and Cartilage, vol. 6, no. 5, pp. 318–323, 1998.
- J. Cibere, N. Bellamy, A. Thorne et al., “Reliability of the knee examination in osteoarthritis: effect of standardization,” Arthritis and Rheumatism, vol. 50, no. 2, pp. 458–468, 2004.
- M. Kothari, A. Guermazi, G. Von Ingersleben et al., “Fixed-flexion radiography of the knee provides reproducible joint space width measurements in osteoarthritis,” European Radiology, vol. 14, no. 9, pp. 1568–1573, 2004.
- D. G. Disler, T. R. McCauley, C. G. Kelman et al., “Fat-suppressed three-dimensional spoiled gradient-echo MR imaging of hyaline cartilage defects in the knee: comparison with standard MR imaging and arthroscopy,” American Journal of Roentgenology, vol. 167, no. 1, pp. 127–132, 1996.
- J. Cibere, H. Zhang, P. Garnero et al., “Association of biomarkers with pre-radiographically defined and radiographically defined knee osteoarthritis in a population-based study,” Arthritis and Rheumatism, vol. 60, no. 5, pp. 1372–1380, 2009.
- K. J. Rothman, S. Greenland, and T. L. Lash, Modern Epidemiology, Lippincott Williams & Wilkins, Philadelphia, Pa, USA, 2008.
- R. Jaeschke, G. H. Guyatt, D. L. Sackett et al., “Users' guides to the medical literature. III. How to use an article about a diagnostic test. B. What are the results and will they help me in caring for my patients?” JAMA, vol. 271, no. 9, pp. 703–707, 1994.
- M. F. Sower, C. Hayes, D. Jamadar et al., “Magnetic resonance-detected subchondral bone marrow and cartilage defect characteristics associated with pain and X-ray-defined knee osteoarthritis,” Osteoarthritis and Cartilage, vol. 11, no. 6, pp. 387–393, 2003.
- A. M. Lievense, B. W. Koes, J. A. N. Verhaar, A. M. Bohnen, and S. M. A. Bierma-Zeinstra, “Prognosis of hip pain in general practice: a prospective followup study,” Arthritis Care & Research, vol. 57, no. 8, pp. 1368–1374, 2007.
- A. A. Wright, C. Cook, and J. H. Abbott, “Variables associated with the progression of hip osteoarthritis: a systematic review,” Arthritis Care and Research, vol. 61, no. 7, pp. 925–936, 2009.
- E. C. Sayre, J. M. Jordan, J. Cibere et al., “Quantifying the association of radiographic osteoarthritis in knee or hip joints with other knees or hips: the Johnston county osteoarthritis project,” The Journal of Rheumatology, vol. 37, no. 6, pp. 1260–1265, 2010.
Copyright © 2016 M. Mushfiqur Rahman et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.