The objective of this research effort is to integrate therapy instruction with child-robot play interaction in order to better assess upper-arm rehabilitation. Using computer vision techniques such as Motion History Imaging (MHI), edge detection, and Random Sample Consensus (RANSAC), movements can be quantified through robot observation. In addition, by incorporating prior knowledge of exercise data, physical therapy metrics, and novel approaches, a mapping to therapist instructions can be created, enabling robotic feedback and intelligent interaction. The results are compared against ground truth data retrieved via the Trimble 5606 Robotic Total Station and by visual experts in order to assess the effectiveness of this approach. We performed a series of upper-arm exercises with two male subjects, captured via a simple webcam. The specific exercises involved adduction and abduction as well as lateral and medial movements. The analysis shows that our algorithmic results compare closely with those obtained from the ground truth data: the average algorithmic error is less than 9% for the range of motion and less than 8% for the peak angular velocity of each subject.
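To illustrate the first of the vision techniques named above, the following is a minimal sketch of a Motion History Image update step. It is not the authors' implementation; it assumes grayscale frames as NumPy arrays, and the duration parameter `tau` and difference `threshold` are illustrative values. Pixels where motion was just detected are set to `tau`, while all others decay by one per frame, so brighter MHI regions correspond to more recent movement.

```python
import numpy as np

def update_mhi(mhi, frame, prev_frame, tau=30, threshold=25):
    """Update a Motion History Image with one new grayscale frame.

    Pixels showing motion (frame difference above `threshold`) are
    stamped with `tau`; all other pixels decay by 1, clamped at 0,
    so brighter regions indicate more recent movement.
    """
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    motion = diff > threshold
    return np.where(motion, tau, np.maximum(mhi - 1, 0))

# Toy example: a bright block moving one pixel right per frame.
frames = [np.zeros((8, 8), dtype=np.uint8) for _ in range(3)]
for t, f in enumerate(frames):
    f[2:5, t:t + 3] = 200

mhi = np.zeros((8, 8), dtype=np.int16)
for prev, cur in zip(frames, frames[1:]):
    mhi = update_mhi(mhi, cur, prev)
```

After processing, the most recently changed pixels hold the value `tau` and pixels that changed one frame earlier hold `tau - 1`, giving the temporal gradient that MHI-based motion quantification relies on.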