Research Article

A Kinect-Based Physiotherapy and Assessment Platform for Parkinson’s Disease Patients

Figure 3

(a) Exercise 1 is an arm stretching/strengthening exercise executed from a standing position. The screenshot shows the screen in its most informative mode. Several regions of information are identified as follows. Region “A” displays the ID number of the exercise being executed. Region “B” shows four key snapshots of a trainer performing the exercise. Region “C” is an inset of the Kinect RGB camera view, depicting an actor performing the exercise in real time in front of the sensor. The red dots superimposed on the actor are skeleton joints whose 3D coordinates are computed in real time and, in the current frame, projected onto the 2D frontal vertical plane as guiding digital artefacts. Region “D” lists three groups of variables used to minimize the effect of random joint positional errors arising from the hardware and to detect macroscopic movements accurately. Region “E” shows the number of detected repetitions for each arm (in the screenshot, the actor has completed seven repetitions with each arm and is working on the next repetition). Regions “F” and “G” show debugging information that helps the developers fine-tune the parameters in region “D”. Finally, the menu button in region “H” returns the user to the main menu shown in Figure 2. In normal (nondebugging) operation, only regions A, B, C, E, and H are visible to the patient. (b) A snapshot of an actor performing Exercise 1 in front of a 25″ monitor at a distance of approximately 2.5 m from the Kinect sensor (placed to the left of the monitor). In actual deployment, a much larger 55–58″ monitor would be preferred.
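To make the mechanism behind regions “D” and “E” concrete, the following minimal sketch illustrates one plausible per-frame pipeline: exponential smoothing to damp random joint positional errors, followed by hysteresis-based repetition counting for one arm. This is not the platform’s actual code (which the article does not show); the joint quantities, smoothing factor, and threshold values below are illustrative assumptions only.

from dataclasses import dataclass
from typing import Optional

ALPHA = 0.5          # exponential-smoothing factor (assumed); lower = heavier smoothing
RAISE_THRESH = 0.35  # smoothed wrist height above shoulder (m) counted as "raised" (assumed)
LOWER_THRESH = 0.10  # smoothed height below which the arm counts as "lowered" again (assumed)

@dataclass
class ArmRepCounter:
    reps: int = 0
    raised: bool = False
    smoothed: Optional[float] = None  # smoothed wrist-minus-shoulder height

    def update(self, wrist_y: float, shoulder_y: float) -> int:
        """Feed one frame of (noisy) joint heights; return the repetition count so far."""
        height = wrist_y - shoulder_y
        # Exponential smoothing damps the sensor's random positional errors
        # while preserving the macroscopic arm movement (cf. region "D").
        self.smoothed = height if self.smoothed is None else (
            ALPHA * height + (1.0 - ALPHA) * self.smoothed)
        # Hysteresis: a repetition is counted only after a full raise-then-lower
        # cycle, so jitter around a single threshold cannot inflate the count
        # shown in region "E".
        if not self.raised and self.smoothed > RAISE_THRESH:
            self.raised = True
        elif self.raised and self.smoothed < LOWER_THRESH:
            self.raised = False
            self.reps += 1
        return self.reps

left_arm = ArmRepCounter()
# In the platform these heights would come from the Kinect skeleton stream each
# frame; here a synthetic raise-then-lower cycle stands in for the sensor.
for offset in [0.0, 0.2, 0.5, 0.5, 0.3, 0.05, 0.0, 0.0]:
    count = left_arm.update(wrist_y=1.4 + offset, shoulder_y=1.4)
print(count)  # 1 -- one complete raise/lower cycle detected

The two-threshold (hysteresis) design is one common way to detect macroscopic movements robustly in the presence of per-frame jitter, which is consistent with the caption’s description of region “D”, although the platform’s actual parameter groups may differ.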