We compared our results against a ground-truth manual annotation of both full-length feature films in order to obtain quantitative performance measures for our algorithm, such as sensitivity and specificity. For each film in Fig. 15, the manual annotation of the 5 types of human actions included in the training is shown, together with the result of the automatic annotation produced by our system overlaid on top. For the first film, “Route-66”, the figure shows a scene in which our system correctly detected a “walking” action shot, while for the movie “Valkaama” we show particular results for the “drinking” and “picking up” actions. For each of the actions defined in the study, the TPR and TNR results are shown in Table 5. The analyses were made by dividing the actions into groups, in the same way as explained previously for the experiments with the KTH human movement dataset. Each analysis consists of two groups: (1) the action in question, and (2) any other action not considered in the study.
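The one-vs-rest evaluation described above can be sketched as follows. This is a minimal illustration, not the paper's actual evaluation code; the function name `tpr_tnr` and the example label sequences are hypothetical.

```python
def tpr_tnr(y_true, y_pred, action):
    """Sensitivity (TPR) and specificity (TNR) for one action vs. the rest.

    Shots whose ground-truth label equals `action` form the positive group;
    every other shot forms the negative group, as in the grouping above.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == action and p == action)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == action and p != action)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != action and p != action)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != action and p == action)
    tpr = tp / (tp + fn) if tp + fn else 0.0  # sensitivity
    tnr = tn / (tn + fp) if tn + fp else 0.0  # specificity
    return tpr, tnr

# Illustrative per-shot annotations (hypothetical data, not from the films).
truth = ["walking", "drinking", "walking", "picking up", "other"]
pred  = ["walking", "walking",  "walking", "picking up", "other"]
print(tpr_tnr(truth, pred, "walking"))  # → (1.0, 0.6666666666666666)
```

Repeating the call for each of the 5 annotated actions yields one (TPR, TNR) pair per action, matching the per-action layout of Table 5.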