Journal of Sports Medicine
Volume 2013, Article ID 483503, 5 pages
Research Article

Interrater and Intrarater Reliability of the Tuck Jump Assessment by Health Professionals of Varied Educational Backgrounds

1North Country HealthCare, 301 South 7th Street, Williams, AZ 86046, USA
2Proactive Physical Therapy, 3945 East Paradise Falls Drive No. 109, Tucson, AZ 85712, USA
3Department of Physical Therapy and Athletic Training, Northern Arizona University, P.O. Box 15105, Flagstaff, AZ 86011, USA
4Athletic Training Department, Daemen College, 4380 Main Street, Amherst, NY 14226-3592, USA
5DeRosa Physical Therapy, 1301 West University Avenue, Flagstaff, AZ 86001, USA

Received 29 June 2013; Accepted 21 November 2013

Academic Editor: Ryosuke Shigematsu

Copyright © 2013 Lisa A. Dudley et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Objective. The Tuck Jump Assessment (TJA), a clinical plyometric assessment, identifies 10 jumping and landing technique flaws. The objective of this study was to investigate the interrater and intrarater reliability of the TJA with raters of different educational and clinical backgrounds. Methods. Forty participants were video recorded performing the TJA according to the published protocol and instructions. Five raters of varied educational and clinical backgrounds scored the TJA; the scores for the 10 technique flaws were summed to yield the total TJA score. Approximately one month later, 3 raters scored the videos again. Intraclass correlation coefficients determined interrater reliability (5 and 3 raters for the first and second sessions, respectively) and intrarater reliability (3 raters). Results. Interrater reliability with 5 raters was poor (ICC = 0.47; 95% confidence interval (CI) 0.33–0.62). Interrater reliability among the 3 raters who completed 2 scoring sessions improved from 0.52 (95% CI 0.35–0.68) in session one to 0.69 (95% CI 0.55–0.81) in session two. Intrarater reliability was poor to moderate, ranging from 0.44 (95% CI 0.22–0.68) to 0.72 (95% CI 0.55–0.84). Conclusion. The published protocol and rater training were insufficient to allow consistent TJA scoring. There may be a learning effect with the TJA, since interrater reliability improved with repetition. TJA instructions and training should be modified and enhanced before clinical implementation.
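The abstract does not specify which ICC model the authors used; assuming a two-way random-effects, absolute-agreement, single-rater form (ICC(2,1), a common choice for interrater reliability studies of this design), the computation can be sketched as follows. The score matrix and function name here are illustrative, not taken from the study.

```python
import numpy as np

def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    scores: (n_subjects, n_raters) array-like of total TJA scores,
    one row per participant, one column per rater.
    """
    x = np.asarray(scores, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)  # per-subject means
    col_means = x.mean(axis=0)  # per-rater means

    # Two-way ANOVA mean squares
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)  # between subjects
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)  # between raters
    sse = np.sum((x - row_means[:, None] - col_means[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))                        # residual

    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical example: two raters in perfect agreement yield ICC = 1.0,
# while a constant one-point rater bias pulls the ICC below 1.0.
print(icc_2_1([[4, 4], [6, 6], [8, 8]]))  # → 1.0
print(icc_2_1([[4, 5], [6, 7], [8, 9]]))  # < 1.0 (systematic bias penalized)
```

Because ICC(2,1) measures absolute agreement, a rater who is consistently one point higher than another lowers the coefficient even though the rank ordering of participants is identical, which is the behavior a reliability study of a scored assessment typically wants.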