International Journal of Distributed Sensor Networks

Volume 2014 (2014), Article ID 430907, 12 pages

http://dx.doi.org/10.1155/2014/430907
Research Article

Stereoscopic Media Art That Changes Based on Gender Classification Using a Depth Sensor

1Graduate School of Advanced Imaging Science, Multimedia & Film, Chung-Ang University, 221 Huksuk-dong, Dongjak-ku, Seoul 156-756, Republic of Korea

2Center of Human-Centered Interaction for Coexistence, KIST, No. L8325, Hwarangno 14-gil 5, Seongbuk-gu, Seoul 136-791, Republic of Korea

Received 18 February 2014; Accepted 3 April 2014; Published 9 June 2014

Academic Editor: Sabah Mohammed

Copyright © 2014 YoungEun Kim et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Physical and psychological characteristics vary with gender and are used in various fields, including arts such as media art. An interface that provides different interaction results based on the gender of the user enhances participant satisfaction. For gender classification in a dark environment such as an exhibition hall, a depth sensor that distinguished the human head from the body and a support vector machine (SVM) that classified the extracted features were used. For the stereoscopic media art, the factors that influence the audience were set as color, depth, and velocity in a certain direction, and a survey was conducted to examine the preferences of men and women. After gender classification, the preference factors of men and women were applied to produce interactive media art that shows different results depending on the classified gender. The potential of an interface based on gender classification was confirmed through a survey comparing preferences between a conventional interactive system and an interactive system based on gender classification.

1. Introduction

Every person has different physical and psychological characteristics. Physical characteristics, including body shape, voice, and walking pattern, vary with gender and age [1]. Emotional and psychological characteristics, including concentration, memory, and visual and spatial recognition abilities, also differ [2, 3]. Hence, demographic characteristics have been studied for interface design. An interface that incorporates gender characteristics can be used effectively for human-computer interaction, biometrics, web design, learning, demographic data collection, and targeted advertising [4–6]. As the use of gender-based interaction technology expands beyond industrial fields into the arts, such as media art, the immersion and satisfaction of the audience are expected to increase.

Information obtained through interaction with the user is divided into passive information, such as age and gender, which is extracted from the user with image and audio sensors, and active information, such as character input and motion. By installing sensors that extract passive information without being noticed by the user, natural interactions can be achieved. An active input method, which requires sensor installation and recognition of instructions, shows different competency and efficiency levels per user, but it can obtain more accurate data than a passive sensor interface [7]. Older people are familiar with conventional interfaces such as the keyboard and mouse, whereas younger people quickly adapt to natural interfaces such as touch-based and motion-based interfaces. In particular, a spatial interface such as a motion-based interface can provide comfortable and efficient interactions that account for human physical characteristics [8]. When the physical characteristics of the user are combined with passive and active information input through sensors, user-based interaction can be designed effectively.

For user-based interaction, various sensors and algorithms that automatically classify human characteristics are used. To distinguish gender, vision-based recognition using factors including the ears, face, fingerprints, hands, and gait, and audio-based recognition using factors such as voice and speech, are employed [4, 9]. Special factors such as DNA, palmar images, and facial and hand thermograms can also be used for distinguishing people [10, 11]. However, the face and the body are most frequently used for classifying gender and age in real time. SexNet demonstrated the possibility of classifying gender from the user's face [12], and the recognition rate improved considerably with the development of algorithms such as the support vector machine (SVM) [13–15]. A face recognition system can also improve its sensing coverage by being integrated into a wireless sensor network [16]. While the installation location of a sensor for facial measurement is limited, gait analysis can classify gender without the user's awareness by installing the camera to the side. For the image, a model-based method analyzes the human structure and uses skeleton information over a gait cycle [17]. Meanwhile, an appearance-based method uses the silhouette of the walking user, classifying people either by dividing the whole image into body-part images and analyzing them or by analyzing the pattern of the entire image [18, 19].

Additional information for gender classification can be obtained by analyzing the image of the body shape. Information on body shape, hairstyle, clothes, and accessories is used for classifying gender, reducing errors [20]. Gender classification can also be performed by analyzing the face geometry of 3D depth data [21]. In particular, the wide availability of low-cost depth cameras such as the Microsoft Kinect is expected to broaden such applications. Low-cost depth cameras exhibit a lower gender recognition rate than image-based methods because of their low resolution. However, as the image-based method is influenced by the surrounding environment, its recognition rate decreases in dark environments such as an exhibition hall because of the considerable influence of lighting and exposure. A multimodal gender classification method that uses image data and 3D depth data simultaneously has also been proposed [22].

Methods of outputting interaction results and the emotions of the user vary with the purpose of the system. Visual images and sound are the representative output methods and provide more realistic sensations through recent spatial technology. 3D audio technology for sound and stereoscopic technology for visual effects are applied to provide spatial sensations and entertainment and to promote active audience participation [23]. The ability to recognize output data, and the preference for it, also differ with user characteristics such as gender and age [24].

Studies on interface design that consider user characteristics, beyond the simple interaction method, have recently been conducted. An interface based on user characteristics, combining an intuitive input interface with artificial intelligence technology, increases user satisfaction [25]. The output interface enhances work efficiency by displaying comfortable and effective images, or satisfies the user by providing emotional images and sound. Convenience can be increased by applying physical characteristics by age and gender in automobile design, and work efficiency can be improved by applying them to the workspace [1]. Interface technology has been developed in the fields of education, gaming, and entertainment, and it is expanding into the field of media art through sensor technology [26].

2. Human Factors in an Interactive Media Art

2.1. Previous Works

The application of gender and age characteristics to interface design began in various fields, and the relevant characteristics differ by application field. Studies have been conducted to identify the influence of user characteristics on interactive media art [26].

As shown in Figure 1, “Garden Party 1” and “Garden Party 2” are stereoscopic interactive media art that have the same themes and images. A touch-based interface was used in “Garden Party 1,” whereas a motion-based interface was used in “Garden Party 2.” The two artworks were installed in the media art exhibition, and a survey was conducted to investigate the preferences of the audience for the input interface and the motion and color of objects shown through stereoscopic images.

Figure 1: Architecture of Garden Parties 1 and 2.

The survey was classified by age (20s, 30s, and 40s–50s) and gender (men: 18; women: 18), and responses to the input interface, the depth change of objects in the image, and color were examined. For the input interface, men preferred the motion-based interface, whereas older audiences preferred the touch-based interface. As shown in Figure 2, gender differences in preference for interaction with the stereoscopic image were insignificant. In general, the fast interaction (FI) object was preferred to the slow interaction (SI) and noninteraction (NI) objects, and the large depth (LD) change was preferred to the small depth (SD) change. Men showed a high preference for objects with FI and LD characteristics, whereas women showed a higher preference for color change (CC) characteristics than men. According to the survey results, preferences by age and gender were not clearly defined. However, preferences were likely to vary with interactive factors including changes in the interface, color, and velocity in the media art. To design interactive media art that reacts to the gender of the audience on the basis of these previous studies, a study on preferences for color, depth, and velocity in a certain direction was conducted.

Figure 2: Survey of input and output interface.
2.2. Adaptive Color

Color has a considerable visual influence on people, and preferences for it differ by age and gender. Comfort levels vary with lighting color, which affects ongoing tasks [2]. Color preferences by gender are also used for designing virtual environment systems, and a study on the preferences of both heterosexual and bisexual people has been conducted [27–29]. Color preference is influenced by the ethnicity and culture of the experimental subjects, and results vary with the experimental conditions. Because the background color and the atmosphere of an artwork could change survey results, the survey on preference by gender was conducted by adjusting the colors of the artwork to be produced. As depicted in Figure 3, seven colors for which male and female subjects had shown different preferences, namely, green, red, yellow, orange, purple, pink, and blue, were selected.

Figure 3: Color samples for a new artwork.

The survey targeted art students (52 males and 51 females). As shown in Figure 4, the preferences of the men were distributed as green (1.92%), red (26.92%), yellow (9.62%), orange (3.85%), purple (15.38%), pink (17.31%), and blue (25.0%), while those of the women were distributed as green (3.92%), red (35.29%), yellow (13.73%), orange (11.76%), purple (11.77%), pink (15.69%), and blue (7.84%).

Figure 4: Color preference for media art.

From the analysis of color preference by gender, we concluded that red generally received the highest preference, whereas green received the lowest. Women showed a higher preference than men for warm colors such as red, orange, and yellow. On the other hand, men tended to prefer cool colors such as blue, purple, and pink.

2.3. Adaptive Depth Velocity

Space perception abilities and the ability to recognize stereoscopic images differ from person to person. Accordingly, studies have analyzed differences by age and gender in workspaces using stereoscopic images. In a medical analysis of stereoscopic vision, the abilities differ by age and individual, although the difference is not clearly identified statistically [30]. Men perform better than women at spatial model recognition and with stereoscopic images, but a clear reason for this has not been found thus far [31]. Preference by gender also tends to differ with the stereoscopic display equipment [32]. As gender effects differ with the environment in which the stereoscopic image is used, the application field, and the tendencies of the audience, the preference for stereoscopic motion in the media artwork was investigated on the basis of previous studies [33]. The three factors used to apply the adaptive depth velocity that varies with gender are shown in Table 1.

Table 1: Characteristics of depth elements.

As the hands in the image move according to the motion of the audience's hands, blowing particles from the hands to the flowers inside, an interaction occurs and the depth value changes significantly. When day turns to night, the ground and the background are divided into small meshes that are blown toward the audience and then return from the opposite direction. The petals in the fireworks have a complex movement, spiraling toward the audience.

The graph in Figure 5 shows the survey results for the velocity change of the three objects. A higher velocity was preferred for the hand particles, which had a fast interactive factor and a large depth change. The ground and the background, which had simple movements, did not show a significant difference by gender. Men showed a slightly higher preference for the hand particles than women, whereas women showed a higher preference than men for the "petals in fireworks," which exhibited complex and beautiful images without interaction. In general, a considerable portion of the audience desired a higher velocity for all three objects; the velocity weights were set as shown in Table 2.

Table 2: Weights of velocity.
Figure 5: Depth velocity preference of media art.

For the artwork used in the survey, the particle velocity was defined as $v_p$, the mesh velocity of the ground and the background as $v_g$, and the velocity of the petals as $v_f$. The survey weights $w_p$, $w_g$, and $w_f$ from Table 2 were applied to the velocities of the new artwork, and Formula (1) was defined to prevent dizziness caused by stereoscopic effects, considering the characteristics of the artwork:

$$v_p' = w_p\,v_p, \qquad v_g' = w_g\,v_g, \qquad v_f' = w_f\,v_f. \tag{1}$$

3. Gender Classification

Methods that classify gender using the human shape are divided into those using images input through a camera and those using a depth camera such as the Kinect. The human shape can be detected in the image, and color information, including hair color and makeup, can be used. Human characteristics can also be detected from the geometric locations in the depth data. While the depth sensor delivers stable data values regardless of external influences such as lighting, its resolution of 640 × 480 pixels is lower than that of an image sensor. In a typical environment, gender classification using images exhibits a higher recognition rate than classification using depth values. To increase immersion, however, dark lighting is maintained in environments such as media exhibitions. With little light, an image sensor such as a web camera is unlikely to capture an accurate image, making gender classification difficult. Using depth values, the same shape data can be maintained even in a dark environment.

To compare the gender recognition rate between webcam images and the Kinect in a dark environment, the Biwi Kinect head pose database was used, as shown in Figure 6 [34]. The dataset consists of head poses looking in various directions. To evaluate the image-based gender recognition rate in dark environments, images with six different brightness stages (B1–B6) were created by adjusting the brightness of the dataset images, as indicated in Figure 7.
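As an illustration of how such a brightness-graded set can be produced, the sketch below darkens each frame in six linear steps; the linear darkening schedule and the file names are assumptions for illustration, not the authors' exact procedure.

```python
import cv2
import numpy as np

def brightness_stages(image, n_stages=6):
    """Return n_stages copies of `image`, darkened progressively (B1 = brightest)."""
    stages = []
    for k in range(n_stages):
        scale = 1.0 - k / n_stages              # assumed linear darkening schedule
        dark = np.clip(image.astype(np.float32) * scale, 0, 255).astype(np.uint8)
        stages.append(dark)
    return stages

frame = cv2.imread("biwi_rgb_frame.png")        # hypothetical Biwi dataset frame
for i, stage in enumerate(brightness_stages(frame), start=1):
    cv2.imwrite(f"B{i}.png", stage)
```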

Figure 6: Biwi Kinect head pose database.
Figure 7: Dataset with adjusted brightness.

For image-based gender classification, shown in Figure 8, face detection was performed using Haar-like features on the dataset images [35]. The detected face image was normalized to 32 × 32 pixels. The face dataset was then fed into an SVM classifier with a radial basis function (RBF) kernel to perform gender classification.
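A minimal sketch of this image pipeline, assuming OpenCV's stock Haar cascade and scikit-learn's RBF-kernel SVM; the training arrays X and y are placeholders, not the paper's dataset.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

# OpenCV's bundled frontal-face Haar cascade
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_vector(gray):
    """Detect the largest face, normalize to 32 x 32, flatten to a feature vector."""
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    face = cv2.resize(gray[y:y + h, x:x + w], (32, 32))
    return face.flatten().astype(np.float32) / 255.0

clf = SVC(kernel="rbf")      # RBF kernel, as described in the paper
# clf.fit(X, y)              # X: stacked 1024-dim face vectors, y: gender labels
# gender = clf.predict(face_vector(test_gray).reshape(1, -1))
```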

Figure 8: Face detection and normalization.

Further, the SVM was used for gender classification with the 3D depth values. As shown in Figure 9(a), the input data contain the combined image of the person and the background. By measuring the distance between the Kinect sensor and the person, we can separate the person from the background in the depth map. After the background and the person are separated, noise is removed from the human shape, and the shape data are completed through a hole-filling algorithm. The human shape data are then divided into head data and body data. The location of the nose can be predicted from the top of the head, and the nose tip from its prominent depth value. The head and the body can be distinguished by sequentially identifying the locations of the eyes, the mouth, and the neck relative to the location of the nose.
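A rough sketch of the background separation and hole filling described above, assuming a millimeter depth map and a fixed distance threshold; morphological opening and closing stand in for whatever denoising and hole-filling algorithm the authors used.

```python
import cv2
import numpy as np

def person_mask(depth_mm, max_dist_mm=2000):
    """Separate the person from the background by distance, then fill holes."""
    # keep valid pixels closer than the threshold (0 marks missing Kinect data)
    mask = ((depth_mm > 0) & (depth_mm < max_dist_mm)).astype(np.uint8) * 255
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)    # remove speckle noise
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)   # fill small holes
    return mask
```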

Figure 9: Face and body detection with depth data.

The curves of the facial depth data are used for extracting facial components. The depth curves in the horizontal and vertical planes through the feature points are used to efficiently analyze the surface of the face. Characteristics such as the nose tip, glabella, forehead, eyes, and lips are identified, and the region of each characteristic is segmented as shown in Figure 10. The volume and surface area calculated for each region are used as classification features.
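One plausible way to compute such per-region features is to integrate the depth surface over each region mask; the sketch below works under that assumption, taking the deepest point of the region as the reference plane (the paper does not specify this choice).

```python
import numpy as np

def region_features(depth, mask, pixel_mm=1.0):
    """Approximate volume and surface area of a facial region.

    depth: 2D depth map (mm); mask: boolean mask of the region.
    """
    base = depth[mask].max()                 # assumed reference plane (deepest point)
    height = base - depth                    # protrusion above the reference plane
    volume = height[mask].sum() * pixel_mm ** 2
    gy, gx = np.gradient(depth.astype(np.float64))
    # surface-area element sqrt(1 + |grad z|^2) integrated over the region
    area = np.sqrt(1.0 + gx ** 2 + gy ** 2)[mask].sum() * pixel_mm ** 2
    return volume, area
```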

Figure 10: Face shape and four facial regions.

Men and women can also be classified on the basis of body characteristics. Men have broader shoulders than women, whereas women have more prominent breasts than men. As shown in Figure 11, because the distributions of the height and curvature of the chest differ, the average curvature was used as a classification feature.
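Reading "average curvature" as the mean curvature of the depth surface z = f(x, y), which is one common interpretation rather than the paper's stated definition, the body feature might be sketched as follows (the chest region mask is assumed given):

```python
import numpy as np

def mean_curvature(depth):
    """Per-pixel mean curvature H of the surface z = depth(x, y)."""
    zy, zx = np.gradient(depth.astype(np.float64))
    zxy, zxx = np.gradient(zx)
    zyy, _ = np.gradient(zy)
    num = (1 + zx**2) * zyy - 2 * zx * zy * zxy + (1 + zy**2) * zxx
    den = 2 * (1 + zx**2 + zy**2) ** 1.5
    return num / den

# SVM feature: average curvature over the (assumed) chest region mask
# chest_feature = mean_curvature(depth)[chest_mask].mean()
```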

Figure 11: Curvature and breast height visualization of upper body.

As such, classification was performed using the two feature sets and the SVM. The image-based gender recognition rate was measured on randomly selected images looking in various directions at each of the six brightness stages, and the depth-based algorithm was evaluated on the corresponding depth data. Figure 12 shows that the recognition rate of image-based gender classification decreases as the environment becomes darker, whereas the recognition rate using depth data was not influenced by the environmental changes. In a bright environment, the recognition rate of the depth method was lower than that of the image method; in a dark environment, however, the depth method achieved a higher recognition rate.

Figure 12: Gender classification.

4. Design of Adaptive Interactive Art

The adaptive interactive artwork is designed around two interaction levels. In the first, physical interaction level, flowers are painted in response to the movement of the audience in the stereoscopic image. The second level provides an internal interaction, of which the audience is not notified, by recognizing the gender of the user and reflecting gender characteristics back to the audience. As indicated in Figure 13, one Kinect sensor was used as the input sensor to recognize the movement of the user and classify gender, while a 3D projector was used as the display equipment to provide the stereoscopic images.

Figure 13: Adaptive interactive system.

As shown in Figure 14, gender classification is performed on the 3D depth data input through the Kinect sensor. The gender result is transferred to a color generator and a velocity generator, which change the data used in the artwork process. Images of the artwork are shown to the audience through a stereoscopic display while their color and velocity change.

Figure 14: Adaptive interaction process.

Until the system classifies the gender of the audience, the color generator provides interactions with a variety of mixed colors as shown in Figure 15(a). When gender classification is completed, flowers are interactively painted using colors according to gender. Red, which received the highest preference, was selected as the common adjusted color. Accordingly, as shown in Figure 15(b), the ratio of orange and yellow flowers increased in the case of women. Meanwhile, as displayed in Figure 15(c), the ratio of blue, pink, and purple flowers increased in the case of men.

Figure 15: Color ratio of adaptive artwork.

As shown in Figure 16, in the section in which gender is not yet recognized, flowers are created according to the average color ratio of green (2.92%), red (31.11%), yellow (11.68%), orange (7.81%), purple (13.58%), pink (16.5%), and blue (16.42%). After the gender is classified at $t_1$, the color generator shifts the ratio of the created flowers toward the gender-specific preference over the changing section up to $t_2$ and maintains it until $t_3$.
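Reading the changing section as a linear blend from the neutral ratio to a gender-specific target, a sketch of the color generator follows; using the raw survey percentages as the targets is an assumption, since the exhibited work kept red common and shifted the remaining colors.

```python
import numpy as np

# color order: green, red, yellow, orange, purple, pink, blue (values in %)
NEUTRAL = np.array([2.92, 31.11, 11.68, 7.81, 13.58, 16.50, 16.42])
MEN     = np.array([1.92, 26.92,  9.62, 3.85, 15.38, 17.31, 25.00])
WOMEN   = np.array([3.92, 35.29, 13.73, 11.76, 11.77, 15.69, 7.84])

def color_ratio(t, t1, t2, gender=None):
    """Blend from the neutral ratio toward the gender ratio between t1 and t2."""
    if gender is None or t <= t1:
        return NEUTRAL
    target = MEN if gender == "male" else WOMEN
    alpha = min((t - t1) / (t2 - t1), 1.0)   # assumed linear changing section
    return (1 - alpha) * NEUTRAL + alpha * target
```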

Figure 16: Color generator.

The velocity generator determines $v_p'$, $v_g'$, and $v_f'$, the velocity values appropriate for men and women for the three objects. While the gender of the audience is not classified, the average value is maintained. When gender is classified at $t_1$, as shown in Figure 17, the velocity changes linearly toward the gender-specific value, which is maintained from $t_2$ to $t_3$.
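The velocity generator can be sketched the same way: hold the average until classification at $t_1$, ramp linearly, and hold the gender-specific value from $t_2$ onward. This is a reading of Figure 17, not the authors' code.

```python
def object_velocity(t, t1, t2, v_avg, v_gender, classified=False):
    """Piecewise-linear velocity: average before t1, linear ramp on [t1, t2],
    gender-specific value held afterward (until t3, the end of the section)."""
    if not classified or t <= t1:
        return v_avg
    if t >= t2:
        return v_gender
    alpha = (t - t1) / (t2 - t1)
    return (1 - alpha) * v_avg + alpha * v_gender
```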

Figure 17: Velocity generator.

A survey was conducted after installing both the previously exhibited interactive system and the system that changed the color and the velocity simultaneously depending on gender. According to the survey results shown in Figure 18, the interactive media art that changed color and velocity on the basis of gender generally received a higher preference. In particular, women showed a higher preference for it than men.

Figure 18: Investigation of preference for two systems.

5. Conclusion

The characteristics of audiences toward artworks were found to vary across exhibitions. In media art involving technical factors, interface design is crucial. Because a previous study using stereoscopic interactive art showed that audiences have different preferences for changes in color and depth depending on age and gender, a study on media art that changes according to gender was conducted.

A sensor was used for detecting a person, and technology that identifies the age and gender of a person was developed on the basis of images. The use of 3D depth images had been limited by the size and cost of the equipment; however, the application of recognition technology using depth data has increased with the spread of low-cost depth cameras. The depth shape of the person was divided into the face and the body, and gender classification was performed by adjusting the features fed to the SVM. In a typical environment, the gender classification method using images exhibited a higher recognition rate than the method using depth data. Meanwhile, in a special environment such as a dark exhibition hall, the method using depth data exhibited a higher recognition rate.

Interaction technology that provides specialized information for the user by classifying gender, which reveals statistical characteristics, was expanded into the field of media art. First, a survey on the color, depth, direction, and movement of the stereoscopic image was conducted. On the basis of the survey results, the colors and object velocities by gender were determined, and a system that applied color and velocity changes on the basis of gender classification was designed. A survey then compared interactive artwork that reacted only to human movement with interactive artwork that responded differently to movement depending on gender. As a result, the system that showed different results according to gender was preferred.

Further studies will distinguish age using the depth data and increase the accuracy of age and gender identification by also using audio data, such as the voice of the audience. Moreover, studies on gender-based interfaces will continue by applying gender classification technology in various interactive systems for stereoscopic media art, games, and education.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

  1. D. B. Chaffin, J. J. Faraway, X. Zhang, and C. Woolley, "Stature, age, and gender effects on reach motion postures," Human Factors, vol. 42, no. 3, pp. 408–420, 2000.
  2. I. Knez and C. Kers, "Effects of indoor lighting, gender, and age on mood and cognitive performance," Environment and Behavior, vol. 32, no. 6, pp. 817–831, 2000.
  3. J. Chin and W.-T. Fu, "Interactive effects of age and interface differences on search strategies and performance," in Proceedings of the 28th Annual CHI Conference on Human Factors in Computing Systems (CHI '10), pp. 403–412, April 2010.
  4. C. B. Ng, Y. H. Tay, and B. M. Goi, "Vision-based human gender recognition: a survey," http://arxiv.org/abs/1204.1611.
  5. J. Chin, W. T. Fu, and T. Kannampallil, "Adaptive information search: age-dependent interactions between cognitive profiles and strategies," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '09), pp. 1683–1692, Boston, Mass, USA, 2009.
  6. Y.-S. Wang, M.-C. Wu, and H.-Y. Wang, "Investigating the determinants and age and gender differences in the acceptance of mobile learning," British Journal of Educational Technology, vol. 40, no. 1, pp. 92–118, 2009.
  7. E. Farella, A. Pieracci, L. Benini, L. Rocchi, and A. Acquaviva, "Interfacing human and computer with wireless body area sensor networks: the WiMoCA solution," Multimedia Tools and Applications, vol. 38, no. 3, pp. 337–363, 2008.
  8. J. Wagner, M. Nancel, S. Gustafson, S. Huot, and W. E. Mackay, "Body-centric design space for multi-surface interaction," in Proceedings of the 31st Annual CHI Conference on Human Factors in Computing Systems: Changing Perspectives (CHI '13), pp. 1299–1308, Paris, France, May 2013.
  9. S. A. Khan, M. Nazir, N. Riaz, and N. Naveed, "Computationally intelligent gender classification techniques: an analytical study," International Journal of Signal Processing, Image Processing & Pattern Recognition, vol. 4, no. 4, pp. 145–156, 2011.
  10. A. K. Jain, A. Ross, and S. Prabhakar, "An introduction to biometric recognition," IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, no. 1, pp. 4–20, 2004.
  11. M.-H. Wang and P.-Y. Chen, "Using extension theory to design a low-cost and high-accurate personal recognition system," International Journal of Distributed Sensor Networks, vol. 2013, Article ID 952568, 12 pages, 2013.
  12. B. A. Golomb, D. T. Lawrence, and T. J. Sejnowski, "SexNet: a neural network identifies sex from human faces," in Proceedings of the 1990 Conference on Advances in Neural Information Processing Systems, vol. 3, pp. 572–579, 1990.
  13. B. Moghaddam and M.-H. Yang, "Learning gender with support faces," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 5, pp. 707–711, 2002.
  14. A. B. Graf and F. A. Wichmann, "Gender classification of human faces," Biologically Motivated Computer Vision, vol. 2525, pp. 491–500, 2002.
  15. Z. Fan, M. Li, and Y. Lu, "An efficient image depth extraction method based on SVM," International Journal of Multimedia and Ubiquitous Engineering, vol. 8, no. 3, pp. 275–284, 2013.
  16. Y. Jiang, L. Zhang, and L. Wang, "Wireless sensor networks and the internet of things," International Journal of Distributed Sensor Networks, vol. 2013, Article ID 589750, 1 page, 2013.
  17. J.-H. Yoo, D. Hwang, and M. S. Nixon, "Gender classification in human gait using support vector machine," Advanced Concepts for Intelligent Vision Systems, vol. 3708, pp. 138–145, 2005.
  18. S. Yu, T. Tan, K. Huang, K. Jia, and X. Wu, "A study on gait-based gender classification," IEEE Transactions on Image Processing, vol. 18, no. 8, pp. 1905–1910, 2009.
  19. J. Han and B. Bhanu, "Individual recognition using gait energy image," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 2, pp. 316–322, 2006.
  20. B. Li, X.-C. Lian, and B.-L. Lu, "Gender classification by combining clothing, hair and facial component classifiers," Neurocomputing, vol. 76, no. 1, pp. 18–27, 2012.
  21. L. Ballihi, B. Ben Amor, M. Daoudi, A. Srivastava, and D. Aboutajdine, "Boosting 3-D-geometric features for efficient face recognition and gender classification," IEEE Transactions on Information Forensics and Security, vol. 7, no. 6, pp. 1766–1779, 2012.
  22. X. Lu, H. Chen, and A. K. Jain, "Multimodal facial gender and ethnicity identification," Advances in Biometrics, vol. 3832, pp. 554–561, 2006.
  23. K. Stanney, "Realizing the full potential of virtual reality: human factors issues that could stand in the way," in Proceedings of the Virtual Reality Annual International Symposium, pp. 28–34, 1995.
  24. J. Schild, J. J. LaViola Jr., and M. Masuch, "Understanding user experience in stereoscopic 3D games," in Proceedings of the 30th ACM Conference on Human Factors in Computing Systems (CHI '12), pp. 89–98, Austin, Tex, USA, May 2012.
  25. G. N. Yannakakis and J. Hallam, "Real-time game adaptation for optimizing player satisfaction," IEEE Transactions on Computational Intelligence and AI in Games, vol. 1, no. 2, pp. 121–133, 2009.
  26. Y. Kim, M. Lee, S. Nam, and J. Park, "User interface of interactive media art in a stereoscopic environment," Human Interface and the Management of Information: Information and Interaction for Learning, Culture, Collaboration and Business, vol. 8018, no. 3, pp. 219–227, 2013.
  27. Z. Taha, H. Soewardi, and S. Z. M. Dawal, "Color preference of the Malay population in the design of a virtual environment," in Proceedings of the 18th International Conference on Virtual Systems and Multimedia: Virtual Systems in the Information Society (VSMM '12), pp. 545–548, September 2012.
  28. L. Ellis and C. Ficek, "Color preferences according to gender and sexual orientation," Personality and Individual Differences, vol. 31, no. 8, pp. 1375–1379, 2001.
  29. N. C. Silver and R. Ferrante, "Sex differences in color preferences among an elderly sample," Perceptual and Motor Skills, vol. 80, no. 3, pp. 920–922, 1995.
  30. C. M. Zaroff, M. Knutelska, and T. E. Frumkes, "Variation in stereoacuity: normative description, fixation disparity, and the roles of aging and gender," Investigative Ophthalmology and Visual Science, vol. 44, no. 2, pp. 891–900, 2003.
  31. G. S. Hubona, G. W. Shirah, and D. G. Fout, "3D object recognition with motion," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems Extended Abstracts (CHI EA '97), pp. 345–346, Atlanta, Ga, USA, 1997.
  32. J. R. Cooperstock and G. Wang, "Stereoscopic display technologies, interaction paradigms, and rendering approaches for neurosurgical visualization," in Stereoscopic Displays and Applications XX, vol. 7237 of Proceedings of SPIE, Article 723703, 2009.
  33. Y. Kim, S. Nam, and J. Park, "Interactive artwork with adaptive depth velocity in a stereoscopic environment," Advanced Science and Technology Letters, Game and Graphics, vol. 39, pp. 69–72, 2013.
  34. G. Fanelli, M. Dantone, J. Gall, A. Fossati, and L. Van Gool, "Random forests for real time 3D face analysis," International Journal of Computer Vision, vol. 101, no. 3, pp. 437–458, 2013.
  35. P. Viola and M. J. Jones, "Robust real-time face detection," International Journal of Computer Vision, vol. 57, no. 2, pp. 137–154, 2004.