We carried out experimental tests with a training set consisting of 64 video shots (8 people, 4 human actions, and 2 video shots per person and action): boxing (c1), greeting (c2), jogging (c3) and playing tennis (c4). As controls in our analysis, we also considered two cases: (1) a null action, defined as a scene without any human action, and (2) a non-defined action, which comprises actions not included in the training set. In the case of a null action, the resulting trajectories in the PCA eigenspace are concentrated close to the origin.
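The null-action control can be exploited directly for detection: since its eigenspace trajectory hugs the origin, a simple distance threshold suffices. The following is an illustrative sketch, not the paper's implementation; the feature dimensions, the synthetic data, and the threshold `tau` are all assumed for demonstration.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Stand-in training features from action shots (rows = frames,
# columns = motion descriptors); purely synthetic for illustration.
train = rng.normal(size=(200, 10)) * 5.0
pca = PCA(n_components=3).fit(train)

def is_null_action(shot_features, tau=1.0):
    """Flag a shot as a null action if its trajectory in the PCA
    eigenspace stays concentrated near the origin (mean norm < tau)."""
    traj = pca.transform(shot_features)
    return bool(np.linalg.norm(traj, axis=1).mean() < tau)

# A scene without human action: features close to the training mean,
# so the projected trajectory collapses near the eigenspace origin.
null_shot = pca.mean_ + rng.normal(size=(30, 10)) * 0.01
action_shot = rng.normal(size=(30, 10)) * 5.0
print(is_null_action(null_shot), is_null_action(action_shot))
```

The threshold would in practice be calibrated on held-out null-action shots rather than fixed a priori.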
Fig. 11 compares linear PCA and polynomial kernel PCA applied to the four training classes discussed above. The example shows the spaces formed with two, three and four separate human action classes, each represented by a single video shot and a single person. The results demonstrate that KPCA achieves better separation between the classes than linear PCA. Moreover, by fine-tuning the kernel function parameters, we can control the class separation, which ultimately leads to improved classification performance of the algorithm.
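The linear-vs-polynomial comparison can be reproduced qualitatively on synthetic data. The sketch below uses two made-up "action" classes arranged as concentric rings (not the paper's trajectories): linear PCA cannot pull the class means apart, while a degree-2 polynomial kernel PCA maps the radius information into a separable direction. The separation score and all parameters are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA, KernelPCA

rng = np.random.default_rng(1)

# Two synthetic classes: thin rings of radius ~1 and ~4,
# not linearly separable in the original 2-D space.
theta = rng.uniform(0, 2 * np.pi, 100)
r1 = rng.normal(1.0, 0.1, 100)
r2 = rng.normal(4.0, 0.1, 100)
X = np.vstack([np.c_[r1 * np.cos(theta), r1 * np.sin(theta)],
               np.c_[r2 * np.cos(theta), r2 * np.sin(theta)]])
y = np.array([0] * 100 + [1] * 100)

def separation(Z, y):
    """Distance between class means over the summed class spreads."""
    a, b = Z[y == 0], Z[y == 1]
    return np.linalg.norm(a.mean(axis=0) - b.mean(axis=0)) / (a.std() + b.std())

lin = PCA(n_components=2).fit_transform(X)
kpc = KernelPCA(n_components=3, kernel="poly",
                degree=2, gamma=1.0, coef0=1.0).fit_transform(X)

print(f"linear PCA separation: {separation(lin, y):.2f}")
print(f"poly KPCA separation:  {separation(kpc, y):.2f}")
```

Varying `degree` and `gamma` changes the class separation in the kernel eigenspace, mirroring the parameter fine-tuning described above.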