Research Article

Adaptive Self-Occlusion Behavior Recognition Based on pLSA

Figure 4

Sample frames from our datasets. The action labels in each dataset are as follows: (a) KTH dataset: walking (a1), jogging (a2), running (a3), boxing (a4), and handclapping (a5); (b) Weizmann dataset: running, walking, jumping-jack, waving-two-hands, waving-one-hand, and bending; (c) HumanEva dataset: walking (c1), jogging (c2), gestures (c3), boxing (c4), and combo (c5). In the HumanEva dataset, each motion is performed by four subjects and recorded by seven cameras (three RGB and four grayscale) together with ground-truth data for the human joints.
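For reference, the class-label sets named in the caption can be collected in a simple lookup table. The following is a minimal illustrative sketch, not part of the paper's implementation; the `ACTION_LABELS` mapping and the `label_to_index` helper are hypothetical names introduced here only to show how the per-dataset action vocabularies might be indexed for evaluation.

```python
# Hypothetical illustration: class-label sets for each benchmark,
# as listed in the caption of Figure 4. Not the authors' code.
ACTION_LABELS = {
    "KTH": ["walking", "jogging", "running", "boxing", "handclapping"],
    "Weizmann": ["running", "walking", "jumping-jack",
                 "waving-two-hands", "waving-one-hand", "bending"],
    "HumanEva": ["walking", "jogging", "gestures", "boxing", "combo"],
}

def label_to_index(dataset: str) -> dict:
    """Map each action name in a dataset to an integer class index."""
    return {name: i for i, name in enumerate(ACTION_LABELS[dataset])}

if __name__ == "__main__":
    # e.g. {'walking': 0, 'jogging': 1, 'running': 2, ...}
    print(label_to_index("KTH"))
```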