Mathematical Problems in Engineering

Research Article | Open Access

Volume 2015 | Article ID 741068 | 8 pages | https://doi.org/10.1155/2015/741068

Hand Gesture Recognition Using Modified 1$ and Background Subtraction Algorithms

Academic Editor: Ricardo Aguilar-López
Received: 20 Oct 2014
Revised: 19 Mar 2015
Accepted: 19 Mar 2015
Published: 20 Apr 2015

Abstract

Computers and computerized machines have penetrated nearly all aspects of our lives, which raises the importance of the Human-Computer Interface (HCI). Common HCI techniques still rely on simple devices such as keyboards, mice, and joysticks, which cannot keep pace with the latest technology. Hand gestures have become one of the most attractive alternatives to these traditional HCI techniques. This paper proposes a new hand gesture detection system for Human-Computer Interaction using real-time video streaming. The background is removed with the average background algorithm, and hand templates are matched with the 1$ algorithm. Every detected hand gesture is then translated into a command that can be used to control robot movements. The simulation results show that the proposed algorithm achieves a high detection rate and a small recognition time under different lighting conditions, scales, rotations, and backgrounds.

1. Introduction

Computers and computerized machines have penetrated nearly all aspects of our lives, which raises the importance of the Human-Computer Interface (HCI). Common HCI techniques still rely on simple devices such as keyboards, mice, and joysticks, which cannot keep pace with the latest technology [1]. Hand gestures have become one of the most attractive alternatives to these traditional HCI techniques [2]. Gestures are the physical movements of fingers, hands, arms, or body that carry special meaning and can be translated into interaction with the environment. Many devices can sense body positions, hand gestures, voice, facial expressions, and other aspects of human action, and all of them can serve as powerful HCIs. Gesture recognition has many applications, such as communication tools for the hearing impaired, virtual reality, and medicine.

Hand gesture recognition techniques can be divided into two main categories: appearance based approaches and three-dimensional hand model based approaches [3, 4]. Appearance based approaches model the hand appearance using features extracted from a model image. All input frames from the video stream are then compared with the extracted features to detect the correct gesture [5, 6]. Three-dimensional hand model based approaches project the 3D hand model onto 2D images; the hand features are estimated by comparing these projected 2D images with the input images to detect the current hand gesture [7, 8]. In general, appearance based approaches perform better than 3D hand model based approaches in real-time detection, but 3D hand model based approaches offer a rich description that potentially covers a wide class of hand gestures [9].

Hand shape and skin colour are two important features that are commonly used in hand gesture detection and tracking. With them, accuracy can be increased and processing time decreased, because the area of interest in the image is reduced to the hand gesture only. In [10], a real-time skin colour model is developed to extract the region of interest (ROI) of the hand gesture. The algorithm is based on a Haar-wavelet representation and recognizes hand gestures using a database that contains all template gestures. During recognition, a similarity metric is used to compare the features of a test image with those in the database. The simulation results show improvements in detection rate, recognition time, and database size. Another real-time hand gesture recognition algorithm is introduced in [11]. The algorithm is a hybrid of hand segmentation, hand tracking, and multiscale feature extraction. Hand segmentation takes advantage of motion and colour cues during tracking. After that, a multiscale feature extraction process produces a palm-finger decomposition, which is used for gesture recognition. Experimental results show that the algorithm has a good detection rate for hand gestures with different aspect ratios and complicated backgrounds.

In [12], an automatic hand gesture detection and recognition algorithm is proposed. The detection process is based on the Viola-Jones method [13]. Feature vectors of Hu invariant moments [14] are then extracted from the detected hand gesture. Finally, the extracted features are fed to a support vector machine (SVM) for hand gesture classification. Haar-like features and an AdaBoost classifier are used in [15] to recognize hand gestures. A hand gesture recognition algorithm is developed in [16], in which features are extracted from the normalized moment of inertia and the Hu invariant moments of the gestures; as in [12], an SVM is used as the classifier. A neural network model is used in [17] to recognize a hand posture in a given frame, with a space discretization based on face location and body anthropometry used to segment hand postures. The authors in [18] propose a system for detecting and tracking a bare hand in a cluttered background; they use a multiclass support vector machine (SVM) for classification and k-means for clustering.

All of the aforementioned techniques are based on skin colour and face the problem of extracting the region of interest from the entire frame, because objects such as the human arm and face have colour similar to the hand. To solve this problem, a modified 1$ algorithm [19] is used in this paper to extract hand gestures with high accuracy. Most shape descriptors are pixel based, and their computational complexity is too high for real-time performance. The 1$ algorithm, in contrast, is straightforward and has low computational complexity, so it can be used for real-time detection. Its training time is also very small compared with the well-known Viola-Jones method [13].

Background subtraction has a direct effect on the accuracy and computational complexity of hand gesture extraction algorithms [20, 21]. The design of a background subtraction algorithm faces many challenges, such as light changes, shadows, overlapping objects in the visual area, and noise from camera movement [22]. A robust subtraction algorithm is one that handles all of these challenges with high accuracy and reasonable time complexity. Therefore, many research efforts have been devoted over the years to robust background subtraction algorithms. These algorithms can be classified into three groups: Mixture of Gaussians (MoG) [23, 24], Kernel Density Estimation (KDE) [25–28], and Codebook (CB) [29, 30].

This paper proposes a system for hand gesture detection that uses the average background algorithm [31] for background subtraction and the 1$ algorithm [19] for hand template matching. Five hand gestures are detected and translated into commands that can be used to control robot movements. The first contribution of the paper is the use of the 1$ algorithm for hand gesture detection; to the best of the authors' knowledge, this is the first time the 1$ algorithm, previously used in the literature for handwriting detection, has been applied to hand gestures. The second contribution is combining the average background algorithm for background subtraction with the 1$ algorithm. The simulation results of the proposed system show a hand gesture detection rate of 98.6% together with improved computational time.

The rest of the paper is organized as follows. Section 2 discusses the proposed system components. The details of the background subtraction algorithm used in this paper are given in Section 3. Section 4 explains the contour extraction algorithm. The modified template matching algorithm is discussed in Section 5. Section 6 provides simulation results and the performance of the proposed system. Section 7 concludes the paper.

2. System Overview

Figure 1 shows the block diagram of the proposed system, which consists of two main stages. The first stage is the background subtraction algorithm, which removes all static objects in the background and then extracts the region of interest that contains the hand gesture. The second stage compares the current hand gesture with the trained data using the 1$ algorithm and translates the detected gesture into a command. This command can be used to control robot movements in a virtual world through the Webots simulator [32].

3. Background Subtraction Algorithm

In this paper, the average background algorithm [31] is used to remove the background of the input frame. Moving objects are isolated from the whole image, while static objects are considered part of the background of the frame. The technique builds a model of the background and updates it continuously to account for light changes, shadows, overlapping objects in the visual area, and newly added static objects. This model serves as a reference frame that is subtracted from the current frame to detect moving objects. Many algorithms [2, 33, 34] can be used for background subtraction. Such an algorithm must be able to cope with multilevel illumination, detect moving objects at different speeds, and absorb any resident object into the background as soon as possible. Background subtraction can be divided into four main stages [35]: preprocessing, background modelling, foreground detection (the background subtraction proper), and data validation (also known as postprocessing). The second stage, background modelling (also known as background maintenance), is the core of any background subtraction algorithm.

In [36] background modelling algorithms are classified into two main types: recursive and nonrecursive models. Nonrecursive models use a buffer that stores previous frames and estimate the background from the temporal variation of each pixel in the buffer. Recursive models do not keep a buffer of previous frames; instead, they recursively update a single background model from each input frame. Recursive models therefore require less storage than nonrecursive models, but an error occurring in a recursive model can persist much longer than in a nonrecursive one. The best known nonrecursive techniques are frame differencing [37], the average filter [24, 38], median filtering [39], the minimum-maximum filter [40], the linear predictive filter [41], and nonparametric modelling [42]. Recursive techniques include the approximated median filter [43], the single Gaussian model [44], the Kalman filter [45], Mixture of Gaussians (MoG) [23, 24], clustering-based models [46], and Hidden Markov Models (HMM) [47].

In this paper, the average filter technique [24, 38] is used for background modelling, as in [48], to extract moving areas. The average filter creates an initial background from the first $N$ frames, as given in (1). Moving areas are obtained by subtracting the current frame from the average background, as given in (2). To get a better result, a threshold filter is applied to the difference image to create a binary image, as calculated in (3); the threshold is chosen to be 50 because the background colour model is brighter than human skin. The pixels of the average background are updated using (4) to remove noise and absorb static objects. The parameter $\alpha$ controls the speed of the updating process, in other words, how quickly the average background is updated by the current frame; $\alpha$ varies from 0 to 1, and the value 0.001 is used in this paper to keep the learning speed low. The steps of hand extraction are shown in Figures 2, 3, 4, and 5. Consider
$$B_1(x, y) = \frac{1}{N} \sum_{i=1}^{N} F_i(x, y), \tag{1}$$
$$D_t(x, y) = \left| F_t(x, y) - B_t(x, y) \right|, \tag{2}$$
$$M_t(x, y) = \begin{cases} 1, & D_t(x, y) > \tau, \\ 0, & \text{otherwise}, \end{cases} \tag{3}$$
$$B_{t+1}(x, y) = (1 - \alpha)\, B_t(x, y) + \alpha\, F_t(x, y), \tag{4}$$
where $F_t$ is the frame at time $t$, $B_t$ is the average background, $M_t$ is the binary motion mask, and $\tau = 50$ is the threshold.
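To make the stage concrete, the following Python/OpenCV sketch implements (1)-(4) with the paper's values α = 0.001 and threshold 50. It is an illustrative reconstruction, not the authors' C# code: the warm-up length of 30 frames and the use of grayscale frames are assumptions the paper does not specify.

```python
import cv2
import numpy as np

ALPHA = 0.001   # update speed from the paper (slow background learning)
TAU = 50        # binarization threshold from the paper

cap = cv2.VideoCapture(0)

# (1) Initial background: average of the first frames (30 is an assumed count).
frames = []
for _ in range(30):
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32))
background = sum(frames) / len(frames)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)

    diff = cv2.absdiff(gray, background)                        # (2)
    _, mask = cv2.threshold(diff, TAU, 255, cv2.THRESH_BINARY)  # (3)
    cv2.accumulateWeighted(gray, background, ALPHA)             # (4)

    cv2.imshow("foreground mask", mask.astype(np.uint8))
    if cv2.waitKey(1) == 27:  # stop on Esc
        break

cap.release()
cv2.destroyAllWindows()
```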

4. Contour Extraction Algorithm

A contour is a curve connecting all points surrounding a specific part of an image that share the same colour or intensity. To identify hand gestures, the contours of all objects in the thresholded image are detected, as shown in Figure 6. Then the biggest contour, the one with the largest computed area, is selected as the hand's region, as shown in Figure 7. Suzuki's algorithm [49] is used to find the hand contour. The contour points are then approximated by another contour with fewer points. Contour approximation is the process of finding key vertex points [50]; it speeds up subsequent calculations and reduces memory consumption.
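A minimal sketch of this stage in Python/OpenCV: cv2.findContours implements Suzuki's border following [49] and cv2.approxPolyDP implements the Douglas-Peucker key-vertex approximation [50]. The OpenCV 4 return signature and the epsilon_ratio value are assumptions for illustration.

```python
import cv2

def largest_hand_contour(binary_mask, epsilon_ratio=0.01):
    """Return the approximated contour of the biggest blob in a binary mask.

    Expects an 8-bit single-channel mask such as the one produced by the
    threshold filter of the previous stage.
    """
    # Suzuki's algorithm [49]; OpenCV 4 returns (contours, hierarchy).
    contours, _ = cv2.findContours(binary_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)      # biggest area = hand
    # Douglas-Peucker [50]: fewer points, faster matching, less memory.
    epsilon = epsilon_ratio * cv2.arcLength(hand, True)
    return cv2.approxPolyDP(hand, epsilon, True)
```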

5. Template Matching Algorithm

The template matching algorithm [19], called the 1$ algorithm, is mainly used in handwriting recognition, where a stored template is compared with the user input. The 1$ algorithm offers many desirable features: it is scale independent and rotation independent, requires only simple mathematical operations, achieves a high detection rate, allows the user to train any number of gestures, and consumes few resources. All of these features make the 1$ algorithm a good choice for matching the hand's contour.

In this paper, the 1$ template matching algorithm is modified to recognize hand gestures under real-time constraints. This is achieved by comparing the saved contour templates with the biggest contour extracted from the current frame. The best matching template represents the gesture of the current frame. The modified algorithm consists of four main stages.

5.1. Rebuilding the Point Path

This stage makes the comparison operation independent of the number of points saved in a template contour during the training phase. Before the comparison, the algorithm resamples every stored contour template into a contour defined by N equally spaced points. If N is too small, precision decreases; if N is too big, processing time increases, so the best value of N is a compromise between the two. In this paper, N is chosen to be 80. Figure 8 shows different sets of hands versus different point paths.
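The resampling can be sketched as follows, following the published $1 recognizer's resampling procedure [19] with the paper's N = 80 as the default; the helper name is ours.

```python
import numpy as np

def resample(points, n=80):
    """Resample a contour to n points spaced equally along its path."""
    pts = np.asarray(points, dtype=float)
    # Target spacing: total path length divided into n - 1 segments.
    interval = np.linalg.norm(np.diff(pts, axis=0), axis=1).sum() / (n - 1)
    out, d, prev, i = [pts[0]], 0.0, pts[0], 1
    while i < len(pts):
        dist = np.linalg.norm(pts[i] - prev)
        if d + dist >= interval:
            # Place a new point exactly `interval` along the path.
            q = prev + ((interval - d) / dist) * (pts[i] - prev)
            out.append(q)
            prev, d = q, 0.0
        else:
            d += dist
            prev = pts[i]
            i += 1
    while len(out) < n:          # guard against floating-point shortfall
        out.append(pts[-1])
    return np.array(out[:n])
```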

5.2. Rotation Based on Indicative Angle

In this stage the indicative angle, the angle between the centroid of the gesture and its first point, is calculated. The gesture is then rotated until this angle becomes zero. Figure 9 shows hands at different angles.
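A sketch of the rotation step, again following the $1 recognizer's formulation [19]:

```python
import numpy as np

def rotate_to_zero(points):
    """Rotate a gesture so the angle between its centroid and its first
    point (the indicative angle) becomes zero."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    theta = np.arctan2(pts[0, 1] - centroid[1], pts[0, 0] - centroid[0])
    c, s = np.cos(-theta), np.sin(-theta)
    rot = np.array([[c, -s], [s, c]])          # 2-D rotation matrix
    return (pts - centroid) @ rot.T + centroid
```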

5.3. Scale and Translate

At this step the algorithm scales all gestures to a standard square. The scaling is nonuniform and is applied to all candidates C and all templates T. After the scaling operation, each gesture is translated to a reference point; to simplify the subsequent operations, all points are translated so that the gesture lies at the origin (0, 0).
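A sketch of both operations; the side length of the reference square (250 here) is an assumed constant, since the paper does not give one:

```python
import numpy as np

def scale_to_square(points, size=250.0):
    """Nonuniformly scale a gesture to a size x size reference square."""
    pts = np.asarray(points, dtype=float)
    extent = pts.max(axis=0) - pts.min(axis=0)
    extent[extent == 0] = 1.0                  # avoid division by zero
    return (pts - pts.min(axis=0)) / extent * size

def translate_to_origin(points):
    """Translate a gesture so its centroid lies at the origin (0, 0)."""
    pts = np.asarray(points, dtype=float)
    return pts - pts.mean(axis=0)
```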

5.4. Find Optimal Angle for Best Matching

At this stage, all templates and all candidates have been rebuilt, rotated, scaled, and translated. Each candidate C is compared with each template T_i, for i = 1, ..., M, using (5) to find the average Euclidean distance between corresponding points. The template with the least distance is chosen:
$$d(C, T_i) = \frac{1}{N} \sum_{k=1}^{N} \sqrt{\left(C_{x}^{k} - T_{i,x}^{k}\right)^2 + \left(C_{y}^{k} - T_{i,y}^{k}\right)^2}. \tag{5}$$
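The matching stage can be sketched as below. path_distance implements (5); the brute-force sweep over rotation angles stands in for the golden-section angle search used by the original $1 recognizer [19], since the paper does not detail its own angle search. Storing templates as a name-to-points dictionary is our assumption.

```python
import numpy as np

def path_distance(candidate, template):
    """Equation (5): average Euclidean distance of corresponding points."""
    c = np.asarray(candidate, dtype=float)
    t = np.asarray(template, dtype=float)
    return np.linalg.norm(c - t, axis=1).mean()

def best_match(candidate, templates, angles=np.radians(np.arange(-45, 46, 5))):
    """Return (name, distance) of the closest template over a range of
    candidate rotations. `templates` maps gesture names to point arrays
    that went through the same resample/rotate/scale/translate pipeline."""
    cand = np.asarray(candidate, dtype=float)
    centroid = cand.mean(axis=0)
    best_name, best_dist = None, np.inf
    for theta in angles:
        c, s = np.cos(theta), np.sin(theta)
        rotated = (cand - centroid) @ np.array([[c, -s], [s, c]]).T + centroid
        for name, tmpl in templates.items():
            d = path_distance(rotated, tmpl)
            if d < best_dist:
                best_name, best_dist = name, d
    return best_name, best_dist
```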

6. Experimental Results

The proposed system is implemented in the C# programming language and tested in the Webots [32] simulator virtual environment with the Boe-Bot robot, as shown in Figure 10. The machine used in the experiments has an AMD quad-core processor (FX 4-Core Black Edition, 3.8 GHz).

In the experiments, five gestures (forward, right, stop, left, and backward) are used to control robot movements, as shown in Figure 11. The test video stream was grabbed from a web camera at a resolution of 320 × 240. The detection algorithm is connected to the Webots robot simulator using socket programming and controls the movements of the robot in the virtual environment, as sketched below. The hand gestures were recorded with different scales, rotations, and illuminations and with a simple background (i.e., without any objects in the background). A recording of the experiment and implementation can be found in [51].
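As a rough illustration of the gesture-to-command link, the sketch below sends a detected gesture over a TCP socket to a listening controller. The paper does not document its wire protocol, port, or command names, so everything here is hypothetical.

```python
import socket

# Hypothetical command encoding; the paper does not specify one.
GESTURE_TO_COMMAND = {
    "forward": "FWD", "backward": "BCK",
    "left": "LFT", "right": "RGT", "stop": "STP",
}

def send_command(gesture, host="127.0.0.1", port=5005):
    """Send the command for a detected gesture to a robot controller."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(GESTURE_TO_COMMAND[gesture].encode("ascii") + b"\n")
```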

The detection speed of the 1$ algorithm [19] reaches 0.04 seconds per gesture, and the error rate increases if the background contains objects with many edges or objects darker than skin colour, as shown in Figure 12.

Table 1 shows the accuracy of the proposed hand gesture detection algorithm. The detection rate is 100% for the four gestures forward, backward, stop, and right, while the detection rate for the left gesture is 93%. The average processing time is 455 ms and the average detection rate is 98.6%.


Table 1: Accuracy of the proposed hand gesture detection algorithm.

Gesture name | Detection speed (ms) | Testing data (number of images) | Successful detections
Forward | 420 | 100 | 100
Backward | 540 | 100 | 100
Stop | 430 | 100 | 100
Right | 435 | 100 | 100
Left | 450 | 100 | 93

Table 2 compares the performance of the proposed system with previous approaches in terms of number of postures, recognition time, recognition accuracy, frame resolution, number of testing images, scale, rotation, light changes, and background. The proposed algorithm outperforms all the other approaches in recognition accuracy, reaching 98.6%, and its recognition time is better than those of most of the other approaches, except the system in [4]. In addition, the performance of the proposed system is not affected by changes in scale, lighting, rotation, or background.


Table 2: Performance of the proposed system versus previous approaches.

Paper | Number of postures | Recognition time (sec/frame) | Recognition accuracy | Frame resolution | Number of testing images | Scale | Rotation | Light changes | Background
Proposed algorithm | 5 | 0.04 | 98.60% | 320 × 240 | 500 | Variant | Variant | Variant | Cluttered
[10] | 15 | 0.4 | 94.89% | 160 × 120 | 30 | Not discussed | Invariant | Not discussed | Wall
[11] | 6 | 0.09–0.11 | 93.80% | 320 × 240 | 195–221 | Not discussed | Not discussed | Not discussed | Cluttered
[12] | 3 | 0.1333 | 96.20% | 640 × 480 | 130 | Invariant | 30 degrees | Variant | Different
[13] | 4 | 0.03 | 90.00% | 320 × 240 | 100 | Invariant | 15 degrees | Invariant | White wall
[16] | 8 | 0.066667 | 96.90% | 640 × 480 | 300 | Variant | Invariant | Not discussed | Not discussed
[17] | 6 | Not discussed | 93.40% | 100 × 100 | 57–76 | Invariant | Not discussed | Invariant | Wall
[17] | 6 | Not discussed | 76.10% | 100 × 100 | 38–58 | Invariant | Not discussed | Not discussed | Cluttered
[18] | 10 | 0.017 | 96.23% | 640 × 480 or any size | 1000 | Invariant | Invariant | Invariant | Cluttered

7. Conclusions

This paper proposes a system for hand gesture detection that uses the average background algorithm for background subtraction and the 1$ algorithm for hand template matching. The 1$ algorithm is modified to recognize hand gestures under real-time constraints. Five hand gestures are detected and translated into commands that can be used to control robot movements. The simulation results show that the proposed algorithm achieves a 98.6% detection rate and a small recognition time under different lighting conditions, scales, rotations, and backgrounds. Such a hand gesture recognition system provides a robust solution for real-life HCI applications.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

1. V. I. Pavlovic, R. Sharma, and T. S. Huang, “Visual interpretation of hand gestures for human-computer interaction: a review,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 677–695, 1997.
2. Y. Benezeth, B. Emile, and C. Rosenberger, “Comparative study on foreground detection algorithms for human detection,” in Proceedings of the 4th International Conference on Image and Graphics (ICIG '07), pp. 661–666, Sichuan, China, August 2007.
3. T. Chalidabhongse, K. Kim, D. Harwood, and L. Davis, “A perturbation method for evaluating background subtraction algorithms,” in Proceedings of the International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance, Nice, France, 2003.
4. P. Garg, N. Aggarwal, and S. Sofat, “Vision based hand gesture recognition,” World Academy of Science, Engineering and Technology, pp. 972–977, 2009.
5. C. C. Wang and K. C. Wang, “Hand posture recognition using AdaBoost with SIFT for human robot interaction,” in Recent Progress in Robotics: Viable Robotic Service to Human, 2008.
6. X. W. D. Xu, “Real-time dynamic gesture recognition system based on depth perception for robot navigation,” in Proceedings of the SIGITE International Conference on Robotics and Biomimetics, Guangzhou, China, December 2012.
7. B. Stenger, A. Thayananthan, P. H. S. Torr, and R. Cipolla, “Model-based hand tracking using a hierarchical Bayesian filter,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 9, pp. 1372–1384, 2006.
8. A. El-Sawah, N. D. Georganas, and E. M. Petriu, “A prototype for 3-D hand tracking and posture estimation,” IEEE Transactions on Instrumentation and Measurement, vol. 57, no. 8, pp. 1627–1636, 2008.
9. S. Ziauddin, M. Tahir, T. Madani, M. Awan, R. Waqar, and S. Khalid, “Wiimote squash: comparing DTW and WFM techniques for 3D gesture recognition,” The International Arab Journal of Information Technology, vol. 11, no. 4, 2014.
10. W. K. Chung, X. Wu, and Y. Xu, “A realtime hand gesture recognition based on Haar wavelet representation,” in Proceedings of the IEEE International Conference on Robotics and Biomimetics (ROBIO '08), pp. 336–341, Bangkok, Thailand, February 2009.
11. F. Yikai, W. Kongqiao, C. Jian, and L. Hanqing, “A real-time hand gesture recognition method,” in Proceedings of the IEEE International Conference on Multimedia and Expo (ICME '07), pp. 995–998, Beijing, China, July 2007.
12. L. Yun and Z. Peng, “An automatic hand gesture recognition system based on Viola-Jones method and SVMs,” in Proceedings of the 2nd International Workshop on Computer Science and Engineering (WCSE '09), pp. 72–76, October 2009.
13. Q. Chen, N. D. Georganas, and E. M. Petriu, “Hand gesture recognition using Haar-like features and a stochastic context-free grammar,” IEEE Transactions on Instrumentation and Measurement, vol. 57, no. 8, pp. 1562–1571, 2008.
14. M.-K. Hu, “Visual pattern recognition by moment invariants,” IRE Transactions on Information Theory, vol. 8, no. 2, pp. 179–187, 1962.
15. Q. Chen, N. Georganas, and E. Petriu, “Real-time vision-based hand gesture recognition using Haar-like features,” in Proceedings of the IEEE Instrumentation and Measurement Technology Conference (IMTC '07), pp. 1–6, Warsaw, Poland, May 2007.
16. Y. Ren and C. Gu, “Real-time hand gesture recognition based on vision,” in Proceedings of the 5th International Conference on E-Learning and Games, Edutainment, Changchun, China, 2010.
17. S. Marcel and O. Bernier, “Hand posture recognition in a body-face centered space,” in Proceedings of the International Gesture Workshop, Gif-sur-Yvette, France, 1999.
18. N. H. Dardas and N. D. Georganas, “Real-time hand gesture detection and recognition using bag-of-features and support vector machine techniques,” IEEE Transactions on Instrumentation and Measurement, vol. 60, no. 11, pp. 3592–3607, 2011.
19. J. O. Wobbrock, A. D. Wilson, and Y. Li, “Gestures without libraries, toolkits or training: a $1 recognizer for user interface prototypes,” in Proceedings of the 20th Annual ACM Symposium on User Interface Software and Technology (UIST '07), pp. 159–168, ACM Press, Newport, RI, USA, October 2007.
20. P. W. Power and A. S. Johann, “Understanding background mixture models for foreground segmentation,” in Proceedings of Image and Vision Computing, Auckland, New Zealand, 2002.
21. S. Jiang and Y. Zhao, “Background extraction algorithm based on Partition Weighed Histogram,” in Proceedings of the 3rd IEEE International Conference on Network Infrastructure and Digital Content (IC-NIDC '12), pp. 433–437, September 2012.
22. D. Das and S. Saharia, “Implementation and performance evaluation of background subtraction algorithms,” International Journal on Computational Science & Applications, vol. 4, no. 2, pp. 49–55, 2014.
23. C. Stauffer and W. E. L. Grimson, “Adaptive background mixture models for real-time tracking,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '99), pp. 246–252, June 1999.
24. C. Stauffer and W. E. L. Grimson, “Learning patterns of activity using real-time tracking,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 8, pp. 747–757, 2000.
25. I. Haritaoglu, D. Harwood, and L. S. Davis, “W4: real-time surveillance of people and their activities,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 8, pp. 809–830, 2000.
26. A. Elgammal, R. Duraiswami, D. Harwood, and L. S. Davis, “Background and foreground modeling using nonparametric kernel density estimation for visual surveillance,” Proceedings of the IEEE, vol. 90, no. 7, pp. 1151–1162, 2002.
27. A. Mittal and N. Paragios, “Motion-based background subtraction using adaptive kernel density estimation,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '04), pp. II-302–II-309, July 2004.
28. B. Han, D. Comaniciu, Y. Zhu, and L. S. Davis, “Sequential kernel density approximation and its application to real-time visual tracking,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 7, pp. 1186–1197, 2008.
29. K. Kim, T. H. Chalidabhongse, D. Harwood, and L. Davis, “Real-time foreground-background segmentation using codebook model,” Real-Time Imaging, vol. 11, no. 3, pp. 172–185, 2005.
30. J.-M. Guo, C.-H. Hsia, M.-H. Shih, Y.-F. Liu, and J.-Y. Wu, “High speed multi-layer background subtraction,” in Proceedings of the 20th IEEE International Symposium on Intelligent Signal Processing and Communications Systems (ISPACS '12), pp. 74–79, November 2012.
31. H.-S. Park and K. Hyun, “Real-time hand gesture recognition for augmented screen using average background and CAMshift,” in Proceedings of the 19th Korea-Japan Joint Workshop on Frontiers of Computer Vision (FCV '13), Incheon, Republic of Korea, February 2013.
32. Webots: robot simulation software, http://www.cyberbotics.com/.
33. S.-C. S. Cheung and C. Kamath, “Robust background subtraction with foreground validation for urban traffic video,” EURASIP Journal on Applied Signal Processing, vol. 2005, no. 14, pp. 2330–2340, 2005.
34. S. Mitra and T. Acharya, Data Mining: Multimedia, Soft Computing, and Bioinformatics, John Wiley & Sons, New York, NY, USA, 2003.
35. S. Elhabian, K. El-Sayed, and S. Ahmed, “Moving object detection in spatial domain using background removal techniques: state-of-art,” Recent Patents on Computer Science, vol. 1, pp. 32–54, 2008.
36. S.-C. S. Cheung and C. Kamath, “Robust techniques for background subtraction in urban traffic video,” in Visual Communications and Image Processing, S. Panchanathan and B. Vasudev, Eds., vol. 5308 of Proceedings of SPIE, pp. 881–892, San Jose, Calif, USA, January 2004.
37. R. J. Radke, S. Andra, O. Al-Kofahi, and B. Roysam, “Image change detection algorithms: a systematic survey,” IEEE Transactions on Image Processing, vol. 14, no. 3, pp. 294–307, 2005.
38. C. Stauffer and W. E. L. Grimson, “Adaptive background mixture models for real-time tracking,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '99), vol. 2, IEEE, Fort Collins, Colo, USA, June 1999.
39. R. Cutler and L. Davis, “View-based detection and analysis of periodic motion,” in Proceedings of the International Conference on Pattern Recognition, Brisbane, Australia, 1998.
40. I. Haritaoglu, D. Harwood, and L. S. Davis, “W4: who? when? where? what? A real time system for detecting and tracking people,” in Proceedings of the 3rd Face and Gesture Recognition Conference, 1998.
41. K. Toyama, J. Krumm, B. Brumitt, and B. Meyers, “Wallflower: principles and practice of background maintenance,” in Proceedings of the 7th International Conference on Computer Vision (ICCV '99), vol. 1, pp. 255–261, Kerkyra, Greece, September 1999.
42. R. O. Duda, D. G. Stork, and P. E. Hart, Pattern Classification, John Wiley & Sons, New York, NY, USA, 2000.
43. A. Elgammal, R. Duraiswami, and L. S. Davis, “Efficient kernel density estimation using the fast Gauss transform with applications to color modeling and tracking,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 11, pp. 1499–1504, 2003.
44. C. R. Wren, A. Azarbayejani, T. Darrell, and A. P. Pentland, “Pfinder: real-time tracking of the human body,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 780–785, 1997.
45. S. Jabri, Z. Duric, H. Wechsler, and A. Rosenfeld, “Detection and location of people in video images using adaptive fusion of color and edge information,” in Proceedings of the 15th International Conference on Pattern Recognition (ICPR '00), pp. 627–630, 2000.
46. D. Butler, S. Sridharan, and V. Bove, “Real-time adaptive background segmentation,” in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '03), pp. 349–352, April 2003.
47. L. R. Rabiner, “A tutorial on hidden Markov models and selected applications in speech recognition,” Proceedings of the IEEE, vol. 77, no. 2, pp. 257–285, 1989.
48. H. Park and K. Hyun, “Real-time hand gesture recognition for augmented screen using average background and CAMshift,” in Proceedings of the 19th Korea-Japan Joint Workshop on Frontiers of Computer Vision (FCV '13), pp. 18–21, Incheon, Republic of Korea, January-February 2013.
49. S. Suzuki and K. Abe, “Topological structural analysis of digitized binary images by border following,” Computer Vision, Graphics, and Image Processing, vol. 30, no. 1, pp. 32–46, 1985.
50. D. Douglas and T. Peucker, “Algorithms for the reduction of the number of points required to represent a digitized line or its caricature,” Canadian Cartographer, vol. 10, pp. 112–122, 1973.
51. H. Khalid, “Proposed system implementation,” http://www.youtube.com/watch?v=U3qiZ6ZUWQE.

Copyright © 2015 Hazem Khaled et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
