Mobile Information Systems
Volume 2018 (2018), Article ID 4265352, 11 pages
https://doi.org/10.1155/2018/4265352
Research Article

A Study on Tracking and Augmentation in Mobile AR for e-Leisure

1BioComputing Lab, Department of Computer Science and Engineering, Korea University of Technology and Education (KOREATECH), Cheonan, Republic of Korea
2Creative Content Research Laboratory, ETRI, Daejeon, Republic of Korea
3Institute for Bioengineering Application Technology, Korea University of Technology and Education (KOREATECH), Cheonan, Republic of Korea

Correspondence should be addressed to Yoon Sang Kim

Received 11 October 2017; Revised 31 December 2017; Accepted 16 January 2018; Published 20 February 2018

Academic Editor: Maristella Matera

Copyright © 2018 Seong-Wook Jang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Recently, mobile augmented reality (AR) systems running AR technology that requires high performance have become popular owing to the improved performance of smartphones. In particular, mobile AR that directly interacts with outdoor environments has been in development because of increasing interest in e-leisure driven by improvements in living standards. Therefore, this paper studies tracking and augmentation in mobile AR for e-leisure. We analyzed the performance of human body tracking applications implemented on a mobile system (smartphone) using three methods (marker-based, markerless, and sensor-based) to examine the feasibility of human body tracking in mobile AR. Furthermore, game information augmentation was examined through the implementation of mobile AR using two methods (marker- and sensor-based).

1. Introduction

PC-based augmented reality (AR) is evolving into mobile AR owing to the popularization of smartphones equipped with high-resolution image sensors, GPS, and gyro sensors. In particular, mobile AR that directly interacts with outdoor environments, such as Pokémon Go [1], has attracted attention because of increasing interest in e-leisure driven by improvements in living standards. e-Leisure, the digital leisure culture that includes e-sports, e-games, and interactive media, is becoming a favorite activity by providing both sociocultural worth and enjoyment [2, 3]. Therefore, research into mobile AR for e-leisure (i.e., e-leisure mobile AR) is required. Through such research, e-leisure mobile AR that can augment game information for outdoor sports such as baseball, basketball, and soccer in real time is being developed [4–7].

Tracking points of interest (POIs) in the real world and achieving precise registration of the augmented game information have become increasingly important for e-leisure mobile AR, and various studies are underway. Among the various studies on object tracking in mobile AR, particular emphasis has been placed on studies in which the POI is a human body [8–13].

Precise registration of virtual data with real-world objects (human bodies in this study) for realistic mobile AR is processed in three steps: positioning, rendering, and merging. In the positioning step, virtual data (known as virtual content) are transformed in accordance with the smartphone’s location. In the rendering step, 3D objects comprising the virtual data are projected into 2D images. Finally, in the merging step, the virtual data projected into 2D images are combined with the real world on the smartphone screen (viewport).
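The following is a minimal Python/OpenCV sketch of how these three steps might fit together; the camera intrinsics, pose values, and function names are illustrative assumptions rather than details of the system studied here.

```python
# A minimal sketch of the positioning/rendering/merging steps using OpenCV.
# The camera intrinsics, pose, and the virtual point are illustrative values,
# not parameters from the system described in this paper.
import cv2
import numpy as np

K = np.array([[800.0, 0.0, 480.0],      # focal lengths and principal point
              [0.0, 800.0, 360.0],      # for a hypothetical 960x720 camera
              [0.0, 0.0, 1.0]])
rvec = np.zeros(3)                      # camera rotation (world -> camera)
tvec = np.zeros(3)                      # camera translation

def augment(frame, point_3d, icon):
    """Position a 3D virtual point, render it to 2D, and merge an icon."""
    # Positioning + rendering: project the 3D point into image coordinates.
    pts_2d, _ = cv2.projectPoints(point_3d.reshape(1, 1, 3), rvec, tvec, K, None)
    u, v = pts_2d.ravel().astype(int)
    # Merging: blend the icon (same dtype as the frame) at the projected location.
    h, w = icon.shape[:2]
    y0, x0 = max(v - h // 2, 0), max(u - w // 2, 0)
    roi = frame[y0:y0 + h, x0:x0 + w]
    roi[:] = cv2.addWeighted(roi, 0.5, icon[:roi.shape[0], :roi.shape[1]], 0.5, 0)
    return frame
```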

Coordinating these three steps requires tracking the 3D locations of POIs with precision. The POI can be tracked via widely used methods based on vision and sensors [8]. Vision-based methods are categorized into marker-based and markerless tracking methods.

Marker-based tracking methods (Figure 1) track color markers or markers of specific forms [8]. Wagner et al. [9] proposed a marker pattern for mobile AR and compared its performance with mobile AR using conventional markers. Marker-based tracking methods have the advantages of high stability and ease of implementation compared to markerless tracking methods, but the disadvantage of being inconvenient because a marker must be attached to the subject.

Figure 1: Mobile AR system procedure implemented based on marker [14]. (a) Original image. (b) Corners. (c) Extracted marker edges and corners.

Markerless tracking (Figure 2) tracks a target’s natural features without attaching artificial markers [10]. Ziegler [11] evaluated markerless tracking methods suitable for mobile AR. The markerless tracking method has the advantage of recognizing the target’s rotation angle, direction, lighting changes, partial overlap, and so on based on the target’s features. However, it has the disadvantage that real-time performance cannot be guaranteed in a smartphone environment, which offers lower performance than a PC, because markerless tracking generally requires more computation than marker-based tracking.

Figure 2: Mobile AR system examples implemented based on markerless tracking. (a, b) Mobile AR implemented using SIFT [13]. (c, d) Mobile AR implemented using HOG [16].

The sensor-based method (Figure 3) tracks a target using various sensors, including magnetic, inertial, optical, and mechanical sensors [8]. Tan and Chang [12] and Pryss et al. [13] implemented real-time mobile AR using sensor-based tracking methods. Sensor-based tracking methods generally have the advantage of rapid processing speed, but the disadvantage that calibration and matching processes are required because physical factors such as sensor errors and communication delays affect tracking performance.

Figure 3: Mobile AR system example implementation using sensors [17, 18].

In this paper, we study human body tracking and game information augmentation in e-leisure mobile AR. We examine human body tracking through performance measurement experiments on mobile applications implemented with marker-based, markerless, and sensor-based methods. Game information augmentation is examined through the implementation of e-leisure mobile AR using marker- and sensor-based methods.

This paper is organized as follows. Section 2 describes the existing cases of tracking and augmentation in mobile AR for e-leisure. Section 3 presents the experiments and the results of human body tracking and game information augmentation for e-leisure mobile AR applications. Finally, Section 4 presents the paper’s conclusion.

2. Tracking and Augmentation in Mobile AR for e-Leisure

This section introduces conventional studies of human body tracking and game information augmentation methods that can be applied to e-leisure mobile AR. e-Leisure mobile AR proceeds through the processes of detection area decision, human body tracking, and game information augmentation, as shown in Figure 4. In the detection area decision, the POI is set at the location where game information will be augmented; human body tracking then follows the POI’s movement in real time based on one of the three methods (marker, markerless, and sensor). Finally, the game information augmentation process augments game information at the tracked POI.

Figure 4: Mobile AR process.

Marker-based mobile AR can be implemented in various ways. A typical implementation uses a square marker whose three-dimensional (3D) position and rotation are easy to recognize. Marker-based mobile AR implementation is composed of three stages, as shown in Figure 1: (a) acquiring the original image from the camera; (b) estimating the outline of the white boundary area from the connected components; and (c) augmenting the virtual data at the precise location of the marker pattern using the extracted edges and corners.
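For reference, the stages in Figure 1 can be approximated with a short Python/OpenCV sketch such as the one below; it is not the ARToolkit implementation the figure is taken from, and the threshold and area values are illustrative assumptions.

```python
# A simplified OpenCV sketch of the three marker-detection stages described
# above (image acquisition, boundary/corner extraction, augmentation anchor).
# It is not the ARToolkit implementation; thresholds are illustrative.
import cv2

def find_marker_corners(frame):
    # (a) Original image -> grayscale and binarisation.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 100, 255, cv2.THRESH_BINARY)
    # (b) Connected components / contours of candidate boundary regions.
    contours, _ = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    for cnt in contours:
        approx = cv2.approxPolyDP(cnt, 0.03 * cv2.arcLength(cnt, True), True)
        # (c) A sufficiently large quadrilateral is treated as a marker;
        # its four corners anchor the virtual data.
        if len(approx) == 4 and cv2.contourArea(approx) > 1000:
            return approx.reshape(4, 2)
    return None
```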

Tracking human bodies using markers requires attaching markers to all targets, which is impractical. To overcome this disadvantage, studies have been conducted on using environmental information from real-world data instead of markers [15]. Markerless tracking methods, which use the feature data of natural objects in the real world, are one way of exploiting such environmental information.

Markerless mobile AR implementation methods include the Scale-Invariant Feature Transform (SIFT) [13] and the Histogram of Oriented Gradients (HOG) [16]. SIFT matches the feature points of known objects against the original image; even if an object is partially covered, as shown in Figure 2(a), it can still be identified through its feature points. HOG was proposed for detecting pedestrians using features such as brightness and the directional distribution of gradients within local areas. This method is suitable when a target has been rotated or when the inner pattern is simple and the object can be identified from its outline. Figures 2(c) and 2(d) show the results of detecting human bodies with multiple targets using HOG.
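As a rough illustration of the SIFT-based identification described above (not the implementation used in the cited work), the following Python/OpenCV sketch matches a reference image's keypoints against a camera frame; the file name and ratio threshold are assumptions.

```python
# A brief sketch of SIFT-based target identification: keypoints of a reference
# (target) image are matched against the live frame, so the target can still be
# recognised when it is partially covered. Names and thresholds are illustrative.
import cv2

sift = cv2.SIFT_create()
bf = cv2.BFMatcher()

target = cv2.imread("target.png", cv2.IMREAD_GRAYSCALE)   # hypothetical reference
kp_t, des_t = sift.detectAndCompute(target, None)

def count_good_matches(frame_gray):
    """Return the number of distinctive keypoint matches to the reference."""
    kp_f, des_f = sift.detectAndCompute(frame_gray, None)
    if des_f is None:
        return 0
    matches = bf.knnMatch(des_t, des_f, k=2)
    # Lowe's ratio test keeps only distinctive correspondences.
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    return len(good)
```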

Sensor-based mobile AR provides virtual data based on location data acquired through the GPS as shown in Figure 3 [17, 18]. Since sensor-based mobile AR uses location data, it can be easily expanded to various services by replacing the augmented virtual data.

Sensor-based mobile AR implementation is performed in two steps: (1) measuring the location of the target through the sensor and (2) matching the sensor coordinate system to the image coordinate system.

A mismatch occurs if the matching error between the sensor coordinate system and the image coordinate system is large [19–24]. A typical example of this mismatch is mobile AR using GPS. Because GPS-based mobile AR uses a two-dimensional (2D) coordinate system consisting of latitude and longitude, it does not provide height information between the target and the ground surface. Therefore, GPS-based mobile AR has the problem that the virtual data’s location in the image is inaccurate. Improving the accuracy of sensor-based mobile AR requires calibration to correct this mismatch. Recently, communication network technologies have also been applied in conventional research; an example is the Wi-Fi positioning system (WPS), which tracks the location of devices by searching for their Wi-Fi access point location [25].
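As a simple illustration of working with the 2D GPS coordinate system mentioned above, the following sketch converts latitude/longitude into local planar coordinates around a reference point using an equirectangular approximation; it is an illustrative helper, not part of any cited system, and height must still be obtained separately.

```python
# A minimal sketch of converting GPS latitude/longitude (a 2D coordinate
# system) into local planar coordinates relative to a reference location,
# e.g. before projecting a virtual label onto the screen.
import math

EARTH_RADIUS_M = 6378137.0  # WGS-84 equatorial radius

def gps_to_local_xy(lat_deg, lon_deg, ref_lat_deg, ref_lon_deg):
    """Equirectangular approximation: adequate over short distances."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    ref_lat, ref_lon = math.radians(ref_lat_deg), math.radians(ref_lon_deg)
    x = (lon - ref_lon) * math.cos((lat + ref_lat) / 2.0) * EARTH_RADIUS_M  # east
    y = (lat - ref_lat) * EARTH_RADIUS_M                                    # north
    return x, y
```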

The general aim of game information augmentation is to provide information to users by either highlighting or annotating objects or humans [26]. Figure 5(a) shows an example [27] in which game information augmentation has been applied to a basketball game: the game information is augmented by highlighting the surroundings of the objects of interest with bright circles. Here, the augmented game information includes the player number, team color, and play direction. Figure 5(b) shows an example [28] in which game information augmentation has been applied to a material management system: a marker attached to the object of interest provides material information to users via annotations, and the material information is augmented on the screen of the mobile system (smartpad) based on the marker existing in the real world.

Figure 5: Examples of game information augmentation using highlighting and annotation. (a) Game information augmentation example using highlighting [27]. (b) Game information augmentation example using annotation [28].

3. Experimental Results and Discussion

This section examines the human body tracking and game information augmentation methods for application to e-leisure mobile AR as shown in Table 1. Human body tracking methods are implemented in the first experiment, and the feasibility of applying such methods to mobile devices is examined based on performance measurement experiments. The second experiment examines the feasibility of applying game information visualization to mobile devices through the implementation of e-leisure mobile AR.

Table 1: Summary of experiments for e-leisure mobile AR.

We examine the feasibility of human body tracking methods on mobile devices by comparing the performance of applications implemented using various human body tracking methods. Marker-based, markerless, and sensor-based human body tracking were implemented using color markers, HOG, and IMU sensors, respectively.

Three experiments were conducted with respect to three conditions (capability, resolution, and number of people). The first condition used low-capability (Galaxy S4, 1.6 GHz CPU) and high-capability (Galaxy S6, 2.1 GHz CPU) smartphones to determine the effect of device performance. The second condition used low-resolution (320 × 240) and high-resolution (960 × 720) input images (30 fps) to determine the effect of resolution. The third condition examined the effect of the number of people (a single user versus multiple users).

In this experiment, marker-based human body tracking applications were implemented using uniforms as color markers [29, 30], as shown in Figure 6, because participants in e-leisure (game and sports) environments often wear uniforms. The color marker method extracts the POI using the HSV color space and a threshold value; the HSV color space was selected because it is robust against shadows and unequal illumination [29, 30]. The marker-based human body tracking application was implemented using the OpenCV Library connected to the Android software development kit (SDK) and native development kit (NDK).
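The following Python/OpenCV sketch illustrates the HSV-based color marker extraction described above; the HSV thresholds and minimum area are illustrative values rather than those used in the experiment (the actual application runs OpenCV through the Android SDK/NDK).

```python
# A minimal sketch of colour-marker extraction in the HSV colour space.
# The thresholds below (a green uniform) are illustrative assumptions.
import cv2
import numpy as np

def track_color_marker(frame_bgr, lower_hsv=(40, 80, 60), upper_hsv=(80, 255, 255)):
    """Return bounding boxes of regions matching the uniform colour."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    # Remove small noise before extracting candidate regions.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 500]
    return boxes  # each box is (x, y, w, h) for a tracked body candidate
```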

Figure 6: An application implemented using color-marker-based human body tracking. (a) Tracking results for a single user. (b) Tracking results for multiple users (two people). (c) Tracking result for multiple users (three people). (d) Tracking result for multiple users (four people). (e) Tracking result for multiple users (five people).

Figure 6 shows the results of the color-marker-based human body tracking application. Figure 6(a) displays the tracking result for a single user, and Figures 6(b)–6(e) display the tracking results for multiple users (two to five people). In the tracking results, a white rectangle indicates a tracked human body, and users are grouped according to the color of the marker (red or green).

Markerless human body tracking requires many calculations. Therefore, this section implements applications using the HOG algorithm, which has already been applied to smartphones and is less complex than other markerless human body tracking methods. As with the marker-based tracking application, markerless human body tracking was implemented using the OpenCV Library combined with the Android SDK and NDK.
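For illustration, the markerless detection step can be sketched with OpenCV's built-in HOG pedestrian detector as below; the parameters shown are library defaults and illustrative values, not those tuned for the smartphone application.

```python
# A short sketch of markerless human body detection with OpenCV's built-in
# HOG pedestrian detector (default people model); parameters are illustrative.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_people(frame_bgr):
    """Return bounding rectangles of detected human bodies."""
    rects, weights = hog.detectMultiScale(frame_bgr, winStride=(8, 8),
                                          padding=(8, 8), scale=1.05)
    return [tuple(r) for r in rects]  # (x, y, w, h) per detection
```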

Figure 7 shows the results of markerless human body tracking applications. Figure 7(a) displays the tracking results for a single user, and Figure 7(b) displays the tracking results of multiple users (five). In the tracking results, a white rectangle indicates a tracked human body.

Figure 7: An application implemented using markerless human body tracking. (a) Tracking result for a single user. (b) Tracking result for multiple users.

Experiments were conducted to measure the recognition rates of the marker-based and markerless human body tracking methods for low- and high-resolution images. Because sensor-based human body tracking performance is determined by the sensor’s accuracy, which is outside the scope of this paper, the recognition rate measurement was not performed for the sensor-based method. The recognition rate was measured by dividing the number of images in which the human body was successfully tracked by the total number of images.

Table 2 shows the recognition rates of the marker-based and markerless human body tracking methods for low- and high-resolution images. The recognition rate of the marker-based method was 96% for both high- and low-resolution images. These results confirm that the recognition rate can be maintained at low resolution when using a large marker such as a uniform. The recognition rate of the markerless method was 82% at high resolution and 62% at low resolution, which confirms that the recognition rate of markerless human body tracking increases at higher resolutions.

Table 2: Recognition rates of the marker-based and markerless human body tracking methods for low- and high-resolution images.

Sensor-based human body tracking is implemented in three steps using an IMU, a server, and a smartphone, as shown in Figure 8. In the first step, the IMU attached to the smartphone transmits the measured 3D position data to the server. In general, IMU measurement data have a large cumulative error; in this experiment, measurement data with a small error were selected through repeated experiments in the same environment. In the second step, the server converts the 3D position data received from the IMUs into 2D position data displayed on the smartphone screen via orthographic projection. Converting 3D position data based on a real-world coordinate system into 2D position data on the user’s screen requires synchronizing the coordinate systems of the 3D position data; in the implemented tracking, the origin of each 3D coordinate system (the IMU’s initial position) was set to the same point so that the coordinate systems were synchronized. In the final step, the smartphone receives the converted 2D position data from the server.
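A minimal sketch of the server-side conversion in the second step is shown below, assuming the IMU reports positions in meters relative to its initial position (the shared origin); the pixel scale and screen size are illustrative assumptions rather than values from the experiment.

```python
# A minimal sketch of orthographically projecting a 3D IMU position onto the
# 2D screen plane: the height axis is dropped and metres are mapped to pixels
# around the screen centre. Scale and screen size are illustrative.
import numpy as np

def orthographic_to_screen(pos_3d_m, px_per_m=50.0, screen_wh=(960, 720)):
    """Convert a 3D position (metres, y up) to 2D screen coordinates (pixels)."""
    x_m, _, z_m = pos_3d_m                 # assume y is the vertical axis
    cx, cy = screen_wh[0] / 2.0, screen_wh[1] / 2.0
    u = int(cx + x_m * px_per_m)
    v = int(cy - z_m * px_per_m)           # screen y grows downwards
    return u, v

# Example: a user 2 m to the right and 3 m ahead of the shared origin.
print(orthographic_to_screen(np.array([2.0, 0.0, 3.0])))  # -> (580, 210)
```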

Figure 8: Block diagram of an application using sensor-based human body tracking.

Figure 9 shows the results of sensor-based human body tracking. The implemented application identifies each user and augments a specific symbol (user identification number and white rectangle), as shown in Figures 9(a) and 9(b).

Figure 9: An application implemented using sensor-based human body tracking. (a) Tracking result for a single user. (b) Tracking result for multiple users.

Table 3 shows the experimental results measuring the performance (FPS) of the human body tracking methods according to the smartphone’s capability, the image resolution, and the number of tracked people. The results show that the FPS varied with smartphone capability and image resolution, with the largest differences observed for the markerless method, followed by the marker-based and sensor-based methods. The FPS showed little variation with the number of tracked targets for all three methods. Considering the real-time requirement (over 15 fps), marker-based human body tracking at low resolution (average above 35 fps), markerless human body tracking at low resolution (average above 15 fps), and sensor-based human body tracking (average of 60 fps or higher) could be applied to mobile AR. However, the recognition rates in Table 2 indicate that the markerless method has a low recognition rate for low-resolution images and the disadvantage that users cannot be identified. Therefore, we used the marker- and sensor-based methods in the experiments examining game information augmentation for e-leisure mobile AR.

Table 3: Experimental results of human body tracking methods: performance comparisons with respect to capability, resolution, and the number of people (unit: fps).
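For reference, frame rates such as those in Table 3 can be measured with a simple harness like the following sketch; it is illustrative and not the measurement code used in the experiments.

```python
# A brief sketch of measuring the average frames per second of a tracking
# function over a sequence of frames; illustrative only.
import time

def measure_fps(process_frame, frames):
    """Average fps of `process_frame` over an iterable of frames."""
    start = time.perf_counter()
    count = 0
    for frame in frames:
        process_frame(frame)
        count += 1
    elapsed = time.perf_counter() - start
    return count / elapsed if elapsed > 0 else float("inf")
```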

We examined the feasibility of applying game information augmentation to mobile devices by implementing an e-leisure mobile AR using marker- and location-based human body tracking. The e-leisure mobile AR system developed for game information augmentation operates through the processes of detection area decision, human body tracking, and game information augmentation. The detection area was determined by categorizing outdoor sports into two groups (offense and defense) based on the actions involved and analyzing the main body parts used by each group, as shown in Figure 10 [31]. As a result of this analysis, the hand, which manipulates an object such as a sword or ball, was derived as the detection area for the offensive group, and the upper body, which carries equipment such as protective gear, was derived as the detection area for the defensive group.

Figure 10: Detection areas derived from the outdoor-sports analysis for game information augmentation.

Game information augmentation was performed in the detection areas, namely the hands and upper body. Examples of game information augmentation in these detection areas were analyzed using conventional content. Figure 11 shows the game information selected for augmentation from conventional content: the augmentations for the detected hands and upper body are shown in Figure 11(a) (fire, sword, and flag) and Figure 11(b) (target, status, and epaulet), respectively.

Figure 11: Experimental results obtained through game information augmentation. (a) Augmentation on the detected hand region. (b) Augmentation on the detected upper body region.

Figure 12 shows the overall flow of human body tracking and game information visualization in the mobile AR system, which proceeds through the processes of detection area decision, human body tracking, and game information visualization. The user’s hands and upper body are set as POIs in the detection area decision; during human body tracking, the human body is tracked using color markers and sensors; finally, the game information is augmented for the user of interest among the various users.

Figure 12: Overall block diagram of the mobile AR for e-leisure.

As shown in Figure 13, the mobile AR system provides functions for selecting the human of interest (HOI), visualizing game information, and visualizing the first-person shooter view. The HOI selection function activates the visualization of a chosen user (user 1, 2, or 3), and the game information visualization function attaches virtual game symbols (target, status (heart), and epaulet) to the HOI.

Figure 13: Layout of the developed mobile AR for e-leisure.

Figure 14 shows the results of the game information augmented by the mobile AR: Figure 14(a) shows a screenshot of human body tracking in the marker-based mobile AR, and Figure 14(b) shows the game information (target) augmented by identifying users in the marker-based mobile AR. Figure 14(c) shows a screenshot of human body tracking in the sensor-based mobile AR, and Figure 14(d) shows a screenshot of game information (sword) augmented through user identification in the sensor-based mobile AR. The performance measurements for the implemented mobile AR on the high-capability smartphone at high resolution confirmed stable operation (real-time tracking and visualization without interruption) at 20 fps (marker-based) and 62 fps (location-based).

Figure 14: Results of game information augmentation applied to the mobile AR. (a) Result of marker-based human body tracking. (b) Result of marker-based game information augmentation. (c) Result of sensor-based human body tracking. (d) Result of sensor-based game information augmentation.

4. Conclusions

This paper examined methods of human body tracking and game information augmentation in e-leisure mobile AR. Human body tracking was examined by implementing mobile AR applications using three methods. The experimental results showed that, considering the real-time requirement (over 15 fps), marker-based human body tracking at low resolution (average above 35 fps), markerless human body tracking at low resolution (average above 15 fps), and sensor-based human body tracking (average of 60 fps or higher) could be applied to mobile AR. However, the markerless tracking method was confirmed to be unsuitable for the e-leisure mobile AR environment (outdoors with multiple users), considering user identification and the dependence of its recognition rate on resolution.

Game information augmentation was examined through the implementation of marker- and location-based mobile AR systems. The performances of the marker- and location-based systems were 20 and 62 fps, respectively, and the results confirmed that the human body tracking and game information visualization methods operated stably on a smartphone. Furthermore, we confirmed that the developed system provides interactions among users that conventional mobile ARs could not provide, and we derived a methodology for selecting the game information to be augmented. These results are expected to be greatly helpful for the study of e-leisure mobile AR.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This research was supported by the Ministry of Culture, Sports, and Tourism (MCST) and the Korea Creative Content Agency (KOCCA) under the Culture Technology (CT) Research & Development Program.

References

1. Pokémon Go, https://www.pokemongo.com.
2. G. Nimrod and H. Adoni, “Conceptualizing e-leisure,” Loisir et Société/Society and Leisure, vol. 35, no. 1, pp. 31–56, 2012.
3. M. E. Son, N. H. Kim, and I. D. Kong, “Positive correlation between cyber leisure time and sympathetic activity in high school students,” FASEB Journal, vol. 26, pp. 705–707, 2012.
4. S. O. Lee, S. C. Ahn, J. I. Hwang, and H. G. Kim, “A vision-based mobile augmented reality system for baseball games,” in Proceedings of the International Conference on Virtual and Mixed Reality, Orlando, FL, USA, July 2011.
5. S. Bielli and C. G. Harris, “A mobile augmented reality system to enhance live sporting events,” in Proceedings of the 6th ACM Augmented Human International Conference, Singapore, March 2015.
6. A. Kraft, “Real time baseball augmented reality,” Tech. Rep. WUCSE-2011-99, Computer Science and Engineering, Washington University in St. Louis, St. Louis, MO, USA, 2011.
7. E. Cheshire, C. Halasz, and J. Perin, Player Tracking and Analysis of Basketball Plays, 2015.
8. F. Zhou, H. B. Duh, and M. Billinghurst, “Trends in augmented reality tracking, interaction and display: a review of ten years of ISMAR,” in Proceedings of the IEEE International Symposium on Mixed and Augmented Reality, pp. 193–202, Cambridge, UK, September 2008.
9. D. Wagner, T. Langlotz, and D. Schmalstieg, “Robust and unobtrusive marker tracking on mobile phones,” in Proceedings of the 7th IEEE/ACM International Symposium on Mixed and Augmented Reality, Cambridge, UK, September 2008.
10. D. Lowe, “Distinctive image features from scale-invariant keypoints,” International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.
11. E. Ziegler, Real-Time Markerless Tracking of Objects on Mobile Devices, Doctoral thesis, The Pennsylvania State University, State College, PA, USA, 2009.
12. Q. Tan and W. Chang, “Location-based augmented reality for mobile learning: algorithm, system, and implementation,” Electronic Journal of e-Learning, vol. 13, pp. 138–148, 2015.
13. R. Pryss, P. Geiger, M. Schickler, J. Schobel, and M. Reichert, “Advanced algorithms for location-based smart mobile augmented reality applications,” Procedia Computer Science, vol. 94, pp. 97–104, 2016.
14. ARToolkit, https://archive.artoolkit.org/documentation.
15. O. Jihyun, Hybrid Augmented Reality System on the Mobile Platform, M.S. thesis, Hanyang University, Seoul, Republic of Korea, 2008.
16. C. J. Seo and H. I. Ji, “Pedestrian detection using HOG feature and multi-frame operation,” Transactions of the Korean Institute of Electrical Engineers, vol. 64, no. 3, pp. 193–198, 2015.
17. A. L. S. Ferreira, S. R. dos Santos, and L. C. de Miranda, “TrueSight: a pedestrian navigation system based on automatic landmark detection and extraction on Android smartphone,” in Proceedings of the 14th Symposium on Virtual and Augmented Reality (SVR), Washington, DC, USA, May 2012.
18. Wikitude Drive, http://www.wikitude.com/en/drive.
19. S. K. Choi, “The case analysis of augmented reality contents services and business forecast,” KSII Transactions on Internet and Information Systems, vol. 12, pp. 51–62, 2011.
20. G. Klein and D. Murray, “Parallel tracking and mapping for small AR workspaces,” in Proceedings of the IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR), Washington, DC, USA, November 2007.
21. A. J. Davison, I. D. Reid, N. D. Molton, and O. Stasse, “MonoSLAM: real-time single camera SLAM,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 6, pp. 1052–1067, 2007.
22. P. Keitler, F. Pankratz, B. Schwerdtfeger et al., “Mobile augmented reality based 3D snapshots,” in Proceedings of the 8th IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Orlando, FL, USA, October 2009.
23. S. J. Velat, J. Lee, N. Johnson, and C. D. Crane, “Vision based vehicle localization for autonomous navigation,” in Proceedings of the IEEE International Symposium on Computational Intelligence in Robotics and Automation (CIRA), Jacksonville, FL, USA, June 2007.
24. J. Y. Lee and J. S. Kwon, “Error correction scheme in location-based AR system using smartphone,” Journal of Digital Contents Society, vol. 16, no. 2, pp. 179–187, 2015.
25. H. B. Shim, “Comparative analysis for advanced technologies of the location based service,” Journal of the Korea Institute of Information and Communication Engineering, vol. 16, no. 4, pp. 853–871, 2012.
26. D. Schmalstieg, “Augmented reality methods in games,” in Proceedings of the 4th IEEE/ACM International Symposium on Mixed and Augmented Reality (ISMAR), Washington, DC, USA, October 2005.
27. P. Baudisch, H. Pohl, S. Reinicke et al., “Imaginary reality gaming: ball games without a ball,” in Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology, St Andrews, UK, October 2013.
28. E. Rosten, G. Reitmayr, and T. Drummond, “Real-time video annotations for augmented reality,” in Proceedings of the International Symposium on Visual Computing (ISVC), Lake Tahoe, NV, USA, November 2005.
29. H. Belghit, N. Zenati-Henda, A. Bellabi, S. Benbelkacem, and M. Belhocine, “Tracking color marker using projective transformation for augmented reality application,” in Proceedings of the IEEE International Conference on Multimedia Computing and Systems (ICMCS), Tangiers, Morocco, May 2012.
30. A. Bellarbi, S. Benbelkacem, N. Zenati-Henda, and M. Belhocine, “Hand gesture interaction using color-based method for tabletop interfaces,” in Proceedings of the 7th IEEE International Symposium on Intelligent Signal Processing (WISP), Floriana, Malta, September 2011.
31. S. W. Jang, J. H. Ko, H. J. Lee, and Y. S. Kim, “A study on human body tracking and game information visualization for mobile AR,” International Journal of Mobile Device Engineering, vol. 1, no. 1, pp. 15–20, 2017.