Journal of Electrical and Computer Engineering
Volume 2013 (2013), Article ID 837275, 12 pages
Online Detection of Abnormal Events in Video Streams
1Institut Charles Delaunay, LM2S-UMR STMR 6279 CNRS, University of Technology of Troyes, 10004 Troyes, France
2Observatoire de la Côte d'Azur, UMR 7293 CNRS, University of Nice Sophia-Antipolis, 06108 Nice, France
Received 19 September 2013; Accepted 12 November 2013
Academic Editor: Yi Zhou
Copyright © 2013 Tian Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
We propose an algorithm to handle the problem of detecting abnormal events, a challenging but important subject in video surveillance. The algorithm consists of an image descriptor and an online nonlinear classification method. We introduce the covariance matrix of the optical flow and image intensity as a descriptor encoding movement information. The nonlinear online support vector machine (SVM) first learns a limited set of training frames to provide a basic reference model, then updates the model and detects abnormal events in the current frame. We finally apply the method to a benchmark video surveillance dataset to demonstrate the effectiveness of the proposed technique.
1. Introduction
Visual surveillance is one of the major research areas in computer vision. Within crowd image analysis, abnormal event detection is a key scientific challenge. For instance, Figure 1(a) illustrates a normal scene where people are walking. In Figure 1(b), all the people are suddenly running in different directions. This dataset imitates panic-driven scenes.
Trajectory analysis of objects was described in [1–3]. The moving object was labeled by a blob in consecutive frames, and a trajectory was then produced. Deviations from the learnt trajectories were flagged as abnormal events. Tracking-based approaches are suitable for sparse scenes with a few objects, but the target may be lost due to occlusion.
In [4, 5], abnormal detection approaches using features encoding the motion, texture, and size of objects were introduced. Local image regions in a video were analyzed with a background subtraction method; a dynamic Bayesian network (DBN) was then constructed to model normal and abnormal behavior, and finally a likelihood ratio test was applied to detect abnormal behaviors. In [6], a space-time Markov random field (MRF) model was proposed to detect abnormal activities in a video; a mixture of probabilistic principal component analyzers (MPPCA) was adopted to model local optical flow. Such predictions rely on probabilistic techniques under the assumption that an accurate model exists, but in many situations a robust and tractable model cannot be obtained, so model-free methods need to be studied.
Spatiotemporal motion features described in the bag-of-video-words framework have been adopted to detect abnormal events. In [7], the authors presented an algorithm that monitored optical flow at a set of fixed spatial positions and constructed a histogram of optical flow. The likelihood of the behavior in a newly arriving frame with respect to the probability distribution of the statistically learnt behavior was computed; if the likelihood fell below a preset threshold, the behavior was considered abnormal. In [8], irregular behavior in images or videos was detected by an inference process in a probabilistic graphical model. In [9, 10], the video pixels were densely sampled to form the feature. These methods are based on partial information of images, such as small blocks in a frame, without fully exploiting the global information of the feature. In [11–13], spatiotemporal features modeled the motion regions of the frame as background, and anomalies were detected by subtracting the new sample from the background template. These works are similar to change detection methods when the background is not stable.
In this paper, the proposed algorithm is composed of two parts. First, a covariance feature descriptor is constructed over the whole video frame; then a nonlinear one-class support vector machine algorithm is applied in an online fashion in order to detect abnormal events. The features are extracted based on the optical flow, which represents the movement information. Experiments on a real surveillance video dataset show that our online abnormal detection technique obtains satisfactory performance. The rest of the paper is organized as follows. In Section 2, the covariance matrix descriptor of motion features is introduced. In Section 3, the online one-class SVM classification method is presented. In Section 4, two abnormal detection strategies based on online nonlinear one-class SVM are proposed. In Section 5, we present results on real-world video scenes. Finally, Section 6 concludes the paper.
2. Covariance Descriptor of Frame Behavior
The optical flow is a feature that represents the direction and the amplitude of a movement. It can provide important information about the spatial arrangement of the objects and the change rate of this arrangement [14]. We adopt the Horn–Schunck (HS) optical flow computation method in our work. The optical flow of the gray scale image is formulated as the minimizer of a global energy functional, in which I is the intensity of the image; I_x, I_y, and I_t are the derivatives of the image intensity along the horizontal, vertical, and time dimensions; u and v are the components of the optical flow in the horizontal and vertical directions; and α represents the weight of the regularization term.
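The energy functional referred to above is the standard Horn–Schunck formulation [14]; written out in its usual form, it reads:

```latex
E(u, v) = \iint \Big[ (I_x u + I_y v + I_t)^2
        + \alpha^2 \big( \|\nabla u\|^2 + \|\nabla v\|^2 \big) \Big] \, dx \, dy
```

The first term enforces the brightness-constancy constraint, while the regularization term weighted by α² favors smooth flow fields.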
We introduce the covariance matrix encoding the optical flow and intensity of each frame as the descriptor to represent the movement. The covariance feature descriptor was originally proposed by Tuzel et al. [15] for pattern matching in a target tracking problem. The descriptor is built from a mapping φ that relates the image (whose color information can be gray, RGB, HSV, HLS, etc.) to a d-dimensional feature vector at each pixel, where d is the number of chosen features and the image width and height bound the sampling region. For each frame, the feature can be represented as a covariance matrix computed from the n feature vectors z_k of the sampled pixels and their mean μ. The covariance descriptor of each frame does not retain any information regarding the sample ordering or the number of points n. Because the feature vector can be designed in different ways, the covariance matrix descriptor provides a way to merge multiple parameters. Different choices of feature vector extraction are shown in Table 1, where I is the intensity of the gray image; u and v are the horizontal and vertical components of the optical flow; I_x, u_x, and v_x are the first derivatives of the intensity, horizontal optical flow, and vertical optical flow in the x direction, respectively; I_y, u_y, and v_y are the first derivatives of the corresponding features in the y direction; I_xx, u_xx, and v_xx are the second derivatives in the x direction; and I_yy, u_yy, and v_yy are the second derivatives in the y direction. The flowchart of covariance matrix descriptor computation is shown in Figure 2. The optical flow and its partial derivatives characterize the interframe information, which can be regarded as the movement information. The intensity of the frame and the partial derivatives of the intensity describe the intraframe information; they encode the appearance of the frame.
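As an illustration, the per-frame covariance descriptor can be sketched in a few lines of numpy. The function name and the toy feature choice (intensity plus two flow components) are illustrative, not from the paper:

```python
import numpy as np

def covariance_descriptor(features):
    """Covariance matrix descriptor of one frame.

    features: (n, d) array with one d-dimensional feature vector per
    sampled pixel (e.g. intensity, optical-flow components, derivatives).
    Returns the (d, d) covariance matrix; note it carries no information
    about the sample ordering or the number of points n.
    """
    z = np.asarray(features, dtype=float)
    mu = z.mean(axis=0)                       # mean feature vector
    centered = z - mu
    return centered.T @ centered / (len(z) - 1)

# toy example: 4 pixels with d = 3 features (intensity, u, v)
feats = np.array([[0.2, 1.0,  0.0],
                  [0.4, 1.1,  0.1],
                  [0.6, 0.9, -0.1],
                  [0.8, 1.0,  0.0]])
C = covariance_descriptor(feats)
```

The (d, d) output size is fixed by the number of chosen features, which is what lets frames with different pixel counts be compared directly.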
If proper parameters are given, the traditionally used kernels, such as the Gaussian, polynomial, and sigmoidal kernels, have similar performances [19]. The Gaussian kernel is chosen for our spatial features. However, the covariance matrix is an element of a Lie group, so the Gaussian kernel on Euclidean spaces is not suitable for covariance descriptors. A Gaussian kernel on the Lie group is used instead [20, 21], defined for two matrices X and Y in the Lie group G.
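One common way to realize such a kernel for symmetric positive-definite matrices (the case of covariance descriptors) is to plug a log-Euclidean distance into the Gaussian shape. This is a sketch of that variant, not necessarily the exact formula of [20, 21]; the function names are illustrative:

```python
import numpy as np

def spd_logm(X):
    """Matrix logarithm of a symmetric positive-definite matrix,
    computed via eigendecomposition (covariance matrices are SPD)."""
    w, V = np.linalg.eigh(X)
    return (V * np.log(w)) @ V.T

def lie_gaussian_kernel(X, Y, sigma=1.0):
    """Gaussian-shaped kernel on SPD matrices using the log-Euclidean
    (Frobenius) distance between matrix logarithms."""
    d = np.linalg.norm(spd_logm(X) - spd_logm(Y), ord='fro')
    return np.exp(-d**2 / (2 * sigma**2))

# toy 2x2 covariance descriptors
A = np.array([[2.0, 0.3], [0.3, 1.0]])
B = np.array([[1.5, 0.1], [0.1, 1.2]])
k = lie_gaussian_kernel(A, B, sigma=1.0)
```

The key design point is that distances are measured after mapping into the matrix-logarithm domain, where the usual Euclidean Gaussian kernel becomes meaningful.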
3. Online One-Class SVM
The essence of an abnormal detection problem is that only normal scene samples are available, so the one-class SVM framework is well suited to it. The support vector machine (SVM) was initially proposed by Vapnik and Lerner [22, 23]. It is a method based on statistical learning theory and performs well at classifying data and recognizing patterns. There are two frameworks of one-class SVM: one is support vector data description (SVDD), presented in [24, 25], and the other is the ν-support vector classifier (ν-SVC), introduced in [26]. The SVDD formulation is adopted in our work. It computes a sphere-shaped decision boundary with minimal volume around a set of objects. The center a of the sphere and the radius R are determined via an optimization problem in which n is the number of training samples and ξ_i is the slack variable penalizing the outliers. The hyperparameter C is the weight restraining the slack variables; it tunes the number of acceptable outliers. The nonlinear function φ maps a datum into the feature space H; it allows a nonlinear classification problem to be solved by designing a linear classifier in H. The kernel function k(·,·) computes dot products in H, k(x_i, x_j) = ⟨φ(x_i), φ(x_j)⟩. By introducing Lagrange multipliers, the dual problem associated with (6) is written as a quadratic optimization problem, from which the decision function follows.
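The SVDD problem described above, its dual, and the decision function take the following standard form [24, 25] (reproduced from the cited works, since the display equations were lost from the text):

```latex
\min_{R,\,a,\,\xi}\; R^2 + C \sum_{i=1}^{n} \xi_i
\quad \text{s.t.}\quad \|\phi(x_i) - a\|^2 \le R^2 + \xi_i,\; \xi_i \ge 0,

\max_{\alpha}\; \sum_{i} \alpha_i k(x_i, x_i)
  - \sum_{i,j} \alpha_i \alpha_j k(x_i, x_j)
\quad \text{s.t.}\quad 0 \le \alpha_i \le C,\; \sum_{i} \alpha_i = 1,

f(x) = \operatorname{sgn}\!\big( R^2 - \|\phi(x) - a\|^2 \big),\quad
\|\phi(x) - a\|^2 = k(x,x) - 2\sum_i \alpha_i k(x, x_i)
  + \sum_{i,j}\alpha_i \alpha_j k(x_i, x_j).
```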
For large training data, the solution cannot be obtained easily, so an online strategy to train the data is used in our work. Let ã denote a sparse model of the center a built from a small subset of the available samples, called the dictionary D, with coefficients β_j, and let m denote the cardinality of this subset. The distance of a mapped datum φ(x) with respect to the center can then be calculated from the kernel values alone. A modification of the original formulation of the one-class classification algorithm, consisting of minimizing the approximation error, is used [27, 28]. The final solution is expressed in terms of the Gram matrix K, with (i, j)th entry k(x_i, x_j), and the column vector κ with entries κ_j, j = 1, …, m.
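Following the online one-class machines of [27, 28], the elided formulas can be reconstructed as follows (a reconstruction from the cited works, not the paper's own typesetting). The sparse center, the distance to it, and the least-squares solution are:

```latex
\tilde{a} = \sum_{j \in \mathcal{D}} \beta_j \, \phi(x_j), \qquad
\|\phi(x) - \tilde{a}\|^2
  = k(x, x) - 2 \sum_{j} \beta_j \, k(x, x_j) + \beta^{\top} K \beta,

\min_{\beta}\;
\Big\| \frac{1}{n}\sum_{i=1}^{n} \phi(x_i)
  - \sum_{j \in \mathcal{D}} \beta_j \, \phi(x_j) \Big\|^2
\;\Longrightarrow\;
\beta = K^{-1} \kappa, \qquad
\kappa_j = \frac{1}{n} \sum_{i=1}^{n} k(x_j, x_i).
```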
In the online scheme, a new sample arrives at each time step. Let β_t denote the coefficients, K_t the Gram matrix, and κ_t the vector at time step t. A criterion is used to determine whether the new sample should be included in the dictionary: a threshold μ_0 is preset, and for the datum x_t at time step t the coherence-based sparsification criterion [29, 30] compares the largest kernel value between x_t and the dictionary elements with μ_0.
First Case (the criterion is not satisfied). In this case, the new datum is not included in the dictionary; the Gram matrix K is unchanged and only κ is updated online, using the column vector with entries k(x_t, x_j), j = 1, …, m.
Second Case (the criterion is satisfied). In this case, the new datum is included in the dictionary D, and the Gram matrix grows by one row and one column. By using the Woodbury matrix identity, the inverse Gram matrix can be calculated iteratively, and the vector κ_t is updated from κ_{t−1}. Computing κ as in (19) would require saving all the samples in memory; to overcome this issue, κ can instead be computed with an instant estimation, yielding the update (20). Based on (20), we have an online implementation of the one-class SVM learning phase.
4. Abnormal Events Detection
In an abnormal event detection problem, it is assumed that a set of training frames (the positive class) describing the normal behavior is obtained. The general architectures of abnormal detection are introduced below.
The offline training strategy refers to the case where all the training samples are learnt in one batch, as shown in Figure 3(a). We propose two abnormal detection strategies; the difference between them is the time at which the dictionary is fixed. These two strategies are shown in Figures 3(b) and 3(c). Strategy 1 is shown in Figure 3(b): the training data are learnt one by one, and when the training period is finished, the dictionary and the classifier are fixed; each test datum is then classified based on the dictionary. Figure 3(c) illustrates Strategy 2: the training procedure is the same as in Strategy 1, but in the testing period the dictionary is updated whenever a datum satisfies the dictionary update condition. The details of these two strategies are explained in the following.
4.1. Strategy 1
In Strategy 1, the dictionary is updated only during the training period.
Step 1. The first step is to calculate the covariance matrix descriptors of the training frames based on the image intensity and the optical flow: each of the 1st to Nth training frames, together with its optical flow, yields one covariance matrix descriptor.
Step 2. The second step consists of applying one-class SVM to a small subset of the extracted descriptors of the training normal frames to obtain the support vectors. Consider a subset of data selected from the full training sample set; without loss of generality, assume that the first examples are chosen. This set of examples, consisting of the first covariance matrix descriptors of the training frames, is the original dictionary D. In one-class SVM, the majority of the training samples do not contribute to the definition of the decision function; the entries of a minority subset of the training samples are the support vectors that define the decision function.
Step 3. After learning the dictionary, which includes the first samples, the remaining training samples are learned online via the technique described in Section 3: given the dictionary D obtained in Step 2 and a new sample from the remaining training dataset, the sample is included in D if it satisfies the dictionary update condition of the criterion introduced in Section 3.
Step 4. Based on the dictionary and the classifier obtained from the training frames, each incoming frame sample is classified. The workflow of Strategy 1 is shown in Figure 4: the covariance matrix descriptor of the frame to be classified is evaluated against the samples of the dictionary D, where the output "+1" corresponds to a normal frame and "−1" corresponds to an abnormal frame.
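Step 4 amounts to thresholding the distance between the mapped test descriptor and the sparse center. A minimal sketch, in which the kernel choice, names, and threshold value are illustrative rather than taken from the paper:

```python
import numpy as np

def classify_frame(C_test, dictionary, beta, kernel, threshold):
    """Step 4 of Strategy 1: +1 (normal) if the mapped test descriptor
    lies within `threshold` of the sparse center, else -1 (abnormal).

    dist^2 = k(x,x) - 2 * sum_j beta_j k(x, x_j) + beta^T K beta
    """
    kx = np.array([kernel(C_test, Cj) for Cj in dictionary])
    K = np.array([[kernel(Ci, Cj) for Cj in dictionary]
                  for Ci in dictionary])
    dist2 = kernel(C_test, C_test) - 2 * beta @ kx + beta @ K @ beta
    return 1 if dist2 <= threshold else -1

# toy usage with a plain Gaussian kernel on 1-d "descriptors"
g = lambda a, b: np.exp(-np.linalg.norm(a - b)**2 / 2)
D = [np.array([0.0]), np.array([0.2])]
beta = np.array([0.5, 0.5])
label_near = classify_frame(np.array([0.1]), D, beta, g, threshold=0.5)
label_far = classify_frame(np.array([4.0]), D, beta, g, threshold=0.5)
```

Because the distance depends only on kernel evaluations against the dictionary, the per-frame cost is governed by the dictionary size, not the full training set.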
4.2. Strategy 2
In this strategy, the dictionary is updated during both the training and testing periods. The feature extraction step (Step 1) and the online training steps (Steps 2 and 3) are the same as in Strategy 1. The testing step is different: a newly arriving datum that is detected as normal but satisfies the dictionary update condition is included in D, so the dictionary continues to incorporate new samples throughout the testing period.
Step 4: Strategy 2. If the incoming frame sample is classified as normal (+1), the datum is checked by the criterion described in Section 3. When the datum satisfies the dictionary update criterion, this testing sample is included in the dictionary.
5. Abnormal Detection Result
This section presents the results of experiments conducted to analyze the performance of the proposed method. Competitive performance of both Strategy 1 and Strategy 2 on the UMN dataset [31] is presented.
5.1. Abnormal Visual Events Detection: Strategy 1
The results of the proposed abnormal event detection method via Strategy 1 of the online one-class SVM on the UMN dataset [31] are shown below.
The UMN dataset includes eleven video sequences of three different scenes (lawn, indoor, and plaza) of crowded escape events. The normal samples for training or for normal testing are frames where the people are walking in different directions; the samples for abnormal testing are frames where people are running. The detection results for the lawn, indoor, and plaza scenes are shown in Figures 5, 6, and 7, respectively. The Gaussian kernel for the Lie group is used in all three scenes. Different values of the kernel parameter and the penalty factor are chosen, and the area under the ROC curve [32] is shown as a function of these parameters. The results show that taking the covariance matrix as descriptor obtains satisfactory performance for abnormal detection, and that training the samples online obtains detection performance similar to training all the samples offline; online one-class SVM is thus appropriate for detecting abnormal visual events. In the lawn scene, 480 normal frames are used for training. In the offline strategy, the covariance matrices of all training frames must be kept in memory; in Strategy 1, the covariance matrices of the first 100 frames are taken as the initial dictionary. With the adopted feature vector, Gaussian kernel variance, and preset criterion threshold, the dictionary then grows only slightly during online training while the maximum detection accuracy remains high. In the indoor scene, there are 2975 normal frames and 1057 abnormal frames; in the plaza scene, there are 1831 normal frames and 286 abnormal frames. The experimental processes are similar to those of the lawn scene, and the dictionary size of these two scenes remains almost unchanged. The online strategy thus keeps the memory size almost unchanged as the size of the training dataset increases.
5.2. Abnormal Visual Events Detection: Strategy 2
The results of the abnormal event detection method via Strategy 2 on the UMN dataset are shown as follows. In the experiments on the lawn scene, 100 normal samples from the training set are learnt first, and then the other 380 training data are learnt online one by one. After these two training steps, we obtain the basic dictionary from the training samples, as well as the classifier. In the subsequent testing step, the dictionary is updated whenever a sample satisfies the dictionary update criterion: when a new sample arrives, it is first classified by the current classifier. If it is classified as an anomaly, the dictionary and the classifier are not changed; otherwise, if the sample is classified as normal, the sparsification criterion introduced in Section 3 is used to check the correlation between the current dictionary and the new datum, and the datum is included in the dictionary when it satisfies the update condition. The dictionary is thus updated throughout the whole testing period. The other two scenes, indoor and plaza, are handled in the same way. With the adopted feature vector, Gaussian kernel variance, and preset criterion threshold, the dictionary size of the lawn, indoor, and plaza scenes increases from 100 to 106, 102, and 102, respectively. The ROC curves of the detection results for these three scenes are shown in Figures 8(a), 8(b), and 8(c). Besides the memory-saving merit of Strategy 1, Strategy 2 also has the advantage of adapting to long-duration sequences.
The performances of the offline strategy, Strategy 1, and Strategy 2 are shown in Table 2. The performances of the two online strategies are similar to the results obtained when all training samples are learnt together. Two of the feature vectors in Table 1 give the best performance when chosen to form the covariance matrix descriptor; these feature sets are richer, including both movement and intensity information.
The performances of the covariance matrix descriptor based online one-class SVM method and of the state-of-the-art methods are shown in Table 3. The covariance matrix based online abnormal frame detection method obtains competitive performance. In general, our method is better than the others, except for sparse reconstruction cost (SRC) [17] in the lawn and indoor scenes. In that paper, multiscale HOF is taken as a feature, and a testing sample is classified by its sparse reconstruction cost through a weighted linear reconstruction over an overcomplete normal basis set. But computation of the HOF may take more time than calculating the covariance. By adopting the integral image [15], the covariance matrix descriptor of a subimage can be computed conveniently, so the covariance descriptor is also well suited to analyzing partial movement.
6. Conclusion
A method for abnormal event detection at the frame level has been proposed. The method consists of a covariance matrix descriptor encoding the movement features and an online nonlinear one-class SVM classification method. We have developed two nonlinear one-class SVM based abnormal event detection techniques that update the normal models of the surveillance video data in an online framework. The proposed algorithm has been tested on a video dataset, yielding successful results in detecting abnormal events.
Acknowledgments
This work is partially supported by the China Scholarship Council of the Chinese Government, the SURECAP CPER project funded by Région Champagne-Ardenne, and the CAPSEC CRCA FEDER platform.
References
- F. Jiang, J. Yuan, S. A. Tsaftaris, and A. K. Katsaggelos, “Anomalous video event detection using spatiotemporal context,” Computer Vision and Image Understanding, vol. 115, no. 3, pp. 323–333, 2011.
- C. Piciarelli, C. Micheloni, and G. L. Foresti, “Trajectory-based anomalous event detection,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 18, no. 11, pp. 1544–1554, 2008.
- C. Piciarelli and G. L. Foresti, “On-line trajectory clustering for anomalous events detection,” Pattern Recognition Letters, vol. 27, no. 15, pp. 1835–1842, 2006.
- T. Xiang and S. Gong, “Incremental and adaptive abnormal behaviour detection,” Computer Vision and Image Understanding, vol. 111, no. 1, pp. 59–73, 2008.
- S. Gong and T. Xiang, “Recognition of group activities using dynamic probabilistic networks,” in Proceedings of the 9th IEEE International Conference on Computer Vision (ICCV '03), pp. 742–749, October 2003.
- J. Kim and K. Grauman, “Observe locally, infer globally: a space-time MRF for detecting abnormal activities with incremental updates,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '09), pp. 2921–2928, Miami, Fla, USA, June 2009.
- A. Adam, E. Rivlin, I. Shimshoni, and D. Reinitz, “Robust real-time unusual event detection using multiple fixed-location monitors,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 3, pp. 555–560, 2008.
- O. Boiman and M. Irani, “Detecting irregularities in images and in video,” International Journal of Computer Vision, vol. 74, no. 1, pp. 17–31, 2007.
- X. Zhu, Z. Liu, and J. Zhang, “Human activity clustering for online anomaly detection,” Journal of Computers, vol. 6, no. 6, pp. 1071–1079, 2011.
- X. Zhu and Z. Liu, “Human behavior clustering for anomaly detection,” Frontiers of Computer Science in China, vol. 5, no. 3, pp. 279–289, 2011.
- Y. Benezeth, P. M. Jodoin, and V. Saligrama, “Abnormality detection using low-level co-occurring events,” Pattern Recognition Letters, vol. 32, no. 3, pp. 423–431, 2011.
- Y. Benezeth, P. M. Jodoin, V. Saligrama, and C. Rosenberger, “Abnormal events detection based on spatio-temporal co-occurences,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '09), pp. 2458–2465, Miami, Fla, USA, June 2009.
- A. Mittal, A. Monnet, and N. Paragios, “Scene modeling and change detection in dynamic scenes: a subspace approach,” Computer Vision and Image Understanding, vol. 113, no. 1, pp. 63–79, 2009.
- B. K. P. Horn and B. G. Schunck, “Determining optical flow,” Artificial Intelligence, vol. 17, no. 1–3, pp. 185–203, 1981.
- O. Tuzel, F. Porikli, and P. Meer, “Region covariance: a fast descriptor for detection and classification,” in Computer Vision—ECCV 2006, vol. 3952 of Lecture Notes in Computer Science, pp. 589–600, Springer, New York, NY, USA, 2006.
- R. Mehran, A. Oyama, and M. Shah, “Abnormal crowd behavior detection using social force model,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '09), pp. 935–942, Miami, Fla, USA, June 2009.
- Y. Cong, J. Yuan, and J. Liu, “Sparse reconstruction cost for abnormal event detection,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '11), pp. 3449–3456, Providence, RI, USA, June 2011.
- Y. Shi, Y. Gao, and R. Wang, “Real-time abnormal event detection in complicated scenes,” in Proceedings of the 20th International Conference on Pattern Recognition (ICPR '10), pp. 3653–3656, Istanbul, Turkey, August 2010.
- B. Schölkopf and A. J. Smola, Learning with Kernels: Support Vector Machines, Regularization, Optimization and Beyond, MIT Press, Boston, Mass, USA, 2002.
- B. Hall, Lie Groups, Lie Algebras, And Representations: An Elementary Introduction, vol. 222, Springer, New York, NY, USA, 2003.
- C. Gao, F. Li, and C. Shen, “Research on lie group kernel learning algorithm,” Journal of Frontiers of Computer Science and Technology, vol. 6, pp. 1026–1038, 2012.
- V. N. Vapnik and A. Lerner, “Pattern recognition using generalized portrait method,” Automation and Remote Control, vol. 24, pp. 774–780, 1963.
- V. Vapnik, The Nature of Statistical Learning Theory, Springer, New York, NY, USA, 2000.
- D. Tax, One-class classification [Ph.D. thesis], Delft University of Technology, 2001.
- D. M. Tax and R. P. Duin, “Data domain description using support vectors,” in Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN '99), vol. 99, pp. 251–256, 1999.
- B. Schölkopf, J. C. Platt, J. Shawe-Taylor, A. J. Smola, and R. C. Williamson, “Estimating the support of a high-dimensional distribution,” Neural Computation, vol. 13, no. 7, pp. 1443–1471, 2001.
- Z. Noumir, P. Honeine, and C. Richard, “Online one-class machines based on the coherence criterion,” in Proceedings of the 20th European Signal Processing Conference (EUSIPCO '12), pp. 664–668, 2012.
- Z. Noumir, P. Honeine, and C. Richard, “One-class machines based on the coherence criterion,” in Proceedings of the IEEE Statistical Signal Processing Workshop (SSP '12), pp. 600–603, 2012.
- P. Honeine, “Online kernel principal component analysis: a reduced-order model,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 9, pp. 1814–1826, 2012.
- C. Richard, J. C. M. Bermudez, and P. Honeine, “Online prediction of time series data with kernels,” IEEE Transactions on Signal Processing, vol. 57, no. 3, pp. 1058–1067, 2009.
- UMN, “Unusual crowd activity dataset of university of Minnesota,” 2006, http://mha.cs.umn.edu/proj_events.shtml.
- J. A. Hanley and B. J. McNeil, “The meaning and use of the area under a receiver operating characteristic (ROC) curve,” Radiology, vol. 143, no. 1, pp. 29–36, 1982.