Computational Intelligence and Neuroscience
Volume 2018, Article ID 9861697, 11 pages
https://doi.org/10.1155/2018/9861697
Research Article

Mechanical Energy Expenditure-based Comfort Evaluation Model for Gesture Interaction

School of Mechanical Engineering, Northwestern Polytechnical University, Xi’an 710072, China

Correspondence should be addressed to Chen Zheng; chen.zheng@nwpu.edu.cn

Received 14 August 2018; Accepted 31 October 2018; Published 30 December 2018

Academic Editor: Carmen De Maio

Copyright © 2018 Wenjie Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

As an advanced interaction mode, the gesture has been widely used for human-computer interaction (HCI). This paper proposes a comfort evaluation model based on the mechanical energy expenditure (MEE) and the mechanical efficiency (ME) to predict the comfort of gestures. The proposed comfort evaluation model takes nineteen muscles and seven degrees of freedom into consideration based on muscle and joint data and is capable of simulating the MEE and the ME of both static and dynamic gestures. Comfort scores (CSs) can therefore be calculated by normalizing the MEE and the ME and assigning them different decision weights. Compared with traditional measurement-based comfort prediction methods, on the one hand, the proposed comfort evaluation model makes it possible to provide a quantitative value for the comfort of gestures without using electromyography (EMG) or other measuring devices; on the other hand, from the ergonomic perspective, the results provide an intuitive indicator to predict which act carries a higher risk of fatigue or injury for joints and muscles. Experiments are conducted to validate the effectiveness of the proposed model. Comparing the proposed comfort evaluation model with the model based on the range of motion (ROM) and the model based on the method for movement and gesture assessment (MMGA) reveals slight differences, which arise because those models ignore dynamic gestures and the relative kinematic characteristics of their movements. Therefore, considering the feedback of perceived effects and the gesture recognition rate in HCI, designers can better optimize gesture design by making use of the proposed comfort evaluation model.

1. Introduction

Nowadays, as an advanced interaction mode, the gesture has been widely used for human-computer interaction, because it is more natural, convenient, and efficient than traditional input modes such as the mouse, keyboard, and handle. Through gesture-based interaction techniques, the computer translates gestures into control commands. Especially in the domains of video games and smartphones, the gesture-based interaction mode has achieved great success by virtue of favorable user experiences. By means of gestures, users can manipulate virtual objects [1, 2], acquire remote targets [3, 4], select menus [5–7], type text, etc. Therefore, gesture interaction has attracted widespread research interest in areas such as biomechanical modeling, comfort evaluation, gesture design, gesture recognition, and gesture-based HCI. On the one hand, the comfort of gestures plays a significant role in estimating the load of muscles in order to ensure operators' safety and prevent injury. On the other hand, in gesture design, the ergonomic level of gestures directly affects operating comfort, convenience, and efficiency; comfortable gestures result in a good match between operators and their muscles, so that fatigue can be reduced and working hours can be extended. The comfort evaluation in gesture design has therefore attracted attention from both academia and industry. However, current studies do not fully account for the complex reality of human gestures. The authors propose a mechanical energy expenditure-based model which considers the influences of both static and dynamic gestures to help designers evaluate the comfort of gestures during the gesture interaction process.
Firstly, the proposed comfort evaluation model provides designers with a quantitative value for the comfort of gestures without using EMG or other measuring devices; secondly, from the ergonomic perspective, the results obtained from the proposed comfort evaluation model can be regarded as an intuitive indicator to predict which act has the higher risk of fatigue or injury for joints and muscles, so as to reduce operators’ fatigue and extend their working hours.

2. Related Works

In the context of gesture interaction, the comfort is defined as the level of an operator’s well-being when he/she interacts with his/her working environment. However, this level is difficult to detect or measure because it is affected by the operator’s subjective feelings and individual judgment [8]. Currently, there are mainly four classes of solutions to the problem of comfort evaluation: (1) model based on the ROM, (2) model based on the ROM and movement data, (3) methods based on software tools, and (4) measurement based on devices or sensors.

For the comfort evaluation model based on the ROM of joints, various comfort evaluation models, such as JAI [9], RULA [10, 11], LUBA [12], REBA [13], OCRA, NERPA [14], UNE-EN 1005 [14], etc., have been developed and applied to ergonomic design [15–20] in HCI. The above models provide semiquantitative indicators because only static gestures are taken into consideration. However, the operator's comfort always varies with the change of joint angles during the task execution process. Thus, the dynamic gesture is another key factor which influences the operator's comfort. In order to evaluate dynamic gestures, current studies have proposed various comfort evaluation models [21–25]. Andreoni et al. [22] present the MMGA model for the ergonomic ranking of motor tasks; the model is derived from the LUBA model, which evaluates static gestures. Weede et al. [23] present criteria for skill evaluation and ergonomic conditions with assigned weights and proportions; however, the criteria lack biomechanical information. Keyvani et al. [25] use a digital human modeling (DHM) tool to evaluate ergonomic risks; however, the tool cannot be modified as needed. Besides, software tools (e.g., JACK [16], CATIA [26], etc.) and measuring devices or sensors (e.g., EMG, motion capture, pressure distribution measurement systems [27], etc.) have been used to measure the comfort of gestures.

In general, current studies provide valuable experience in comfort evaluation. However, limitations still exist in the propositions reviewed above. Table 1 shows the advantages and limitations of each proposition. The ROM-based models evaluate the comfort of static gestures; the comfort can be obtained directly from the calculation results of each model, which is convenient and requires few professional skills, but dynamic gestures and biomechanical information are not considered. Moreover, the comfort indicator obtained from the ROM-based models is discontinuous, so it cannot provide a precise quantitative value. The model based on the ROM and movement data retains the advantages of the ROM-based models and takes both static and dynamic gestures into consideration, but it is complex and requires certain professional skills. Both the software-based and the sensor-based methods can provide designers with precise quantitative values, but their costs are high and their application also requires professional skills.

Table 1: Advantages and limitations for comfort evaluation models.

After analyzing the advantages and limitations of previous studies, the authors propose a comfort evaluation model based on the MEE to achieve a better optimization of gesture design. As an important biological characteristic, energy expenditure has been studied by researchers in the domain of ergonomics, and an accurate estimate of muscle energy expenditure is essential for evaluating and analyzing the comfort of gestures. Umberger et al. [28] present a model of human muscle energy expenditure that predicts the thermal and mechanical energy liberated by simulating muscle contractions; however, this model has not been applied to ergonomic analysis. Kistemaker et al. [29] present an energy expenditure model, but its application to the comfort of gestures is not mentioned in the study. Battini et al. [30] apply a motion energy system to estimate the energy expenditure for predicting the ergonomic level; however, that study assumes that metabolic energy expenditure depends on the gender, body weight, load weight, arm position, velocity, duration, etc., rather than on muscle contraction. Wang et al. [31] propose an approach to relieve operators' fatigue on an assembly line based on the metabolic energy expenditure; however, the model only covers static gestures.

The paper focuses on the comfort evaluation model based on the MEE to predict the comfort of gestures. By using real musculoskeletal data and mass-inertia characteristics, a biomechanical model of the upper limb is constructed to simulate human gestures and calculate the MEE and the ME of gestures, in which both static and dynamic gestures are taken into account. The comfort scores can therefore be calculated by normalizing the MEE and the ME and assigning them different decision weights, which provides a quantitative and intuitive indicator for evaluating the comfort of gestures. Finally, experiments are conducted to validate the effectiveness of the proposed model. The next section presents the details of the proposed comfort evaluation model.

3. Comfort Evaluation Model

3.1. Musculoskeletal Kinematics Modeling

Biomechanical characteristics of humans should be taken into consideration so that the model can accurately simulate human gestures. One hypothesis adopted in the proposed model is that the upper limb can be simplified as a multi-rigid-body hinge structure [32]. In order to avoid the complexity of the human body, the authors simplify the upper limb as a mechanical model with three segments and seven degrees of freedom. In Figure 1, the joint angles of the model are expressed by θ1–θ7, respectively. F-E represents the flexion-extension of a joint, P-S represents the pronation-supination of a joint, and AD-AB represents the adduction-abduction of a joint.

Figure 1: Human upper limb model.

According to biomechanics, muscles attach to different bones and control the movements of the bones around the joints [33]. Considering the biomechanical information related to the muscles of the right upper limb, the authors propose an upper limb musculoskeletal model based on the human upper limb model presented previously, which consists of three segments, sixteen one-joint muscles, and three two-joint muscles (Figure 2). The model captures the relationship between the muscles and the joint motions; therefore, it can accurately simulate the muscle contractions and the three-dimensional movements. Table 2 shows the corresponding relationship between the muscles and the joint motions.

Figure 2: Upper limb musculoskeletal model.
Table 2: Relationship between muscles and joint motions.

The muscle can shorten to produce a concentric contraction and lengthen to produce an eccentric contraction. In order to establish the musculoskeletal kinematic model, the kinematics of muscles and joints can be related by the following formula:

L = L(θ),   (1)

where L = (l1, l2, …, l19)ᵀ are the lengths of the nineteen muscles and θ = (θ1, θ2, …, θ7)ᵀ are the angles of the seven joints. Taking the derivative of formula (1), the contraction velocities of the muscles can be obtained as

L̇ = J(θ)θ̇,   (2)

where L̇ and θ̇ are the vector of the velocities of the nineteen muscles and the vector of the angular velocities of the seven joints, respectively, and J(θ) represents the Jacobian matrix from the joint space to the muscle space. According to the virtual work principle, the joint torques can be expressed as [34]

τ = R(θ)F = J(θ)ᵀF,   (3)

where τ represents the vector of the joint torques generated by the muscles, F represents the vector of the tensile forces of the muscles, and R(θ) is the matrix of the moment arms of the muscles. According to the kinematics, the transpose of the Jacobian matrix equals the moment-arm matrix of the muscles.
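To make these mappings concrete, the following minimal sketch applies the velocity relation and the virtual work principle. The Jacobian/moment-arm matrix here is a randomly generated stand-in; in the actual model it would be derived from the musculoskeletal geometry.

```python
import numpy as np

def muscle_velocities(jacobian, joint_velocities):
    # L_dot = J(theta) * theta_dot: joint angular velocities -> muscle contraction velocities
    return jacobian @ joint_velocities

def joint_torques(jacobian, muscle_forces):
    # Virtual work principle: tau = J(theta)^T * F
    return jacobian.T @ muscle_forces

# Dimensions from the paper: 19 muscles, 7 joints; the matrix itself is a random stand-in.
rng = np.random.default_rng(0)
J = rng.normal(size=(19, 7))
theta_dot = rng.normal(size=7)   # joint angular velocities (rad/s)
F = rng.normal(size=19)          # muscle tensile forces (N)

L_dot = muscle_velocities(J, theta_dot)  # 19 contraction velocities
tau = joint_torques(J, F)                # 7 joint torques
```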

3.2. Biomechanics Modeling

The force-velocity relationship links the contraction velocities at which the muscles change their lengths to the forces generated by the muscles. The contraction velocities of muscles can be obtained by differentiating the muscles' lengths during the movements of dynamic gestures. The force-velocity relationship can be described by Hill's muscle model [35]:

(F + a)(V + b) = (F0 + a)b.   (4)

Thus, the tensile forces of muscles can be represented as follows:

F = (bF0 − aV)/(V + b),   (5)

where F is the vector of the tensile forces of muscles; V is the vector of the contraction velocities of muscles; F0 is the vector of the maximum isometric forces; a is the vector of the coefficients of contraction heat; b equals aVmax/F0; and Vmax is the vector of the maximum velocities when F = 0. Based on the experience and achievements of previous studies [36], the empirical constants can be taken as a = 0.25F0 and b = 0.25Vmax.
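A small sketch of the Hill-type force-velocity relation in scalar form for one muscle, assuming the common empirical choice a = 0.25·F0 and b = 0.25·Vmax; all parameter values are illustrative:

```python
def hill_tensile_force(v, f0, v_max, k=0.25):
    """Hill's force-velocity relation (F + a)(V + b) = (F0 + a) * b,
    solved for F: F = (b*F0 - a*V) / (V + b), with a = k*F0, b = k*v_max."""
    a = k * f0
    b = k * v_max
    return (b * f0 - a * v) / (v + b)

# At V = 0 the muscle delivers its maximum isometric force F0;
# at V = v_max the tensile force drops to zero.
```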

According to the Newton–Euler formulation, the joint torque is affected by multiple factors, such as the inertial force, the centrifugal force, the gravity force, the muscle force, and the limit of ligaments to the joint. The relationship between the forces applied to the limbs and the resulting motions of the limb segments can be expressed by the following formula:

τ = M(θ)θ̈ + C(θ, θ̇)θ̇ + G(θ) = R(θ)F + τL,   (6)

where τ is the vector of joint torques; θ, θ̇, and θ̈ are the vectors of the joint coordinates, the velocities, and the accelerations, respectively; M(θ) is the inertia matrix and M(θ)θ̈ is the vector of inertial torques; C(θ, θ̇) is the coriolis matrix and C(θ, θ̇)θ̇ is the vector of coriolis torques; G(θ) is the vector of gravity torques; R(θ) is the matrix of moment arms of muscles, F is the vector of muscle forces, and R(θ)F is the vector of moments of the muscle forces; and τL is the vector of ligament torques. Frictions are not taken into consideration in the formula, because they are small when compared with the muscle forces.
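As an illustration of the Newton–Euler relationship, the sketch below computes the inverse dynamics of a single rigid link, a hypothetical one-DOF stand-in for one upper-limb segment (Coriolis terms vanish for a single joint, and friction is neglected as in the paper; the mass and length values are illustrative):

```python
import numpy as np

def one_link_inverse_dynamics(theta, theta_dot, theta_ddot, m=2.0, l=0.3, g=9.81):
    """tau = I*theta_ddot + m*g*(l/2)*cos(theta) for a uniform rod rotating
    about one end (theta = 0 means the link is horizontal). The Coriolis
    term vanishes for a single joint, so theta_dot does not contribute."""
    inertia = m * l ** 2 / 3.0                      # rod about its end
    gravity_torque = m * g * (l / 2.0) * np.cos(theta)
    return inertia * theta_ddot + gravity_torque

# Holding the link horizontally and still requires a pure gravity-balancing torque.
tau_hold = one_link_inverse_dynamics(theta=0.0, theta_dot=0.0, theta_ddot=0.0)
```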

3.3. Mechanical Energy Expenditure

In classical mechanics, work is the application of a force over a distance, and the change in energy of an object is equal to the work done on it. In biomechanics, however, the main topic of interest is not the total power or the work done on the body but the MEE. The MEE of human movements is a valuable reference that contains rich biomechanical information: during symmetric movements, for example, the total work done on the body is zero, but the MEE is not. Generally, the muscles' energy expenditure consists of the MEE and heat dissipation. Because the function of muscles is mainly to generate mechanical forces, the heat dissipation is neglected here, and the authors concentrate on a purely mechanical account of human energy expenditure.

For the joint movements [29], the mechanical power of the joints can be calculated as the sum of the scalar products of the joint torques and the angular velocities:

P = Σᵢ τᵢθ̇ᵢ,   (7)

where P represents the mechanical power of the joints. The mechanical work of the joints can be calculated as the integral of the mechanical power over the duration from t0 to t1:

W = Σᵢ ∫[t0, t1] τᵢθ̇ᵢ dt.   (8)

A muscle never works alone. The joint motions are controlled by the muscle forces generated by the agonist and antagonist muscles, which do positive and negative work, respectively. Thus, the MEE can be calculated as the sum of the integrals of the absolute values of the positive and negative works of the joints:

E = Σᵢ ∫[t0, t1] (|τᵢ⁺θ̇ᵢ| + |τᵢ⁻θ̇ᵢ|) dt,   (9)

where τᵢ⁺ and τᵢ⁻ are the joint torques generated by the agonist and antagonist muscles, respectively. However, this model cannot account for the MEE of static gestures, because E equals zero when the angular velocities θ̇ᵢ equal zero.

Therefore, for dynamic gestures, the MEE is the sum of the integrals of the absolute values of the powers of the joint torques, while for static gestures, the MEE is the sum of the absolute values of the products of the muscle forces of the agonist and antagonist and the duration [39]. The MEE can thus be expressed by the following formula:

E = Σᵢ ∫[t0, t1] (|τᵢ⁺θ̇ᵢ| + |τᵢ⁻θ̇ᵢ|) dt + Σⱼ (|Fⱼ⁺| + |Fⱼ⁻|)(t1 − t0),   (10)

where τᵢ⁺ and τᵢ⁻ are the positive and negative joint torques caused by inertia, gravity, and ligaments, and Fⱼ⁺ and Fⱼ⁻ are the muscle forces generated by the agonist and antagonist.
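A numerical sketch of the dynamic-gesture part of the MEE, approximating the time integral of the summed absolute joint powers with a simple rectangle rule (all torque and velocity values are hypothetical):

```python
import numpy as np

def mechanical_energy_expenditure(tau, theta_dot, dt):
    """Dynamic-gesture MEE: time integral of the summed absolute joint powers,
    E ~ sum over samples of sum_i |tau_i * theta_dot_i| * dt.
    tau and theta_dot are arrays of shape (T, n_joints)."""
    power = np.abs(tau * theta_dot)   # absolute mechanical power per joint
    return float(power.sum() * dt)

# One joint, constant torque 2 N*m at constant velocity 0.5 rad/s for 1 s -> 1 J.
tau = np.full((100, 1), 2.0)
theta_dot = np.full((100, 1), 0.5)
E = mechanical_energy_expenditure(tau, theta_dot, dt=0.01)
```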

The ratio between the mechanical work and the MEE is called the mechanical efficiency, which is a significant indicator for evaluating the performance of movements in ergonomics:

η = W/E.   (11)

3.4. Comfort Score of Gestures

The comfort is defined as the level of an operator's well-being when he/she interacts with his/her working environment. A comfortable gesture can greatly relieve fatigue so that the operator's working hours can be extended. Different joint angles correspond to different gestures, so the ROM of joints has been used to express comfort in previous studies [14, 15, 22, 37, 38]. Generally, the ROM is divided into three levels: a comfortable range, a less comfortable range, and an uncomfortable range, and the different levels are assigned different comfort scores. However, the ROM method is more suitable for the comfort evaluation of static gestures, because its quantitative comfort scores are not accurate enough over the whole range of joint motion. In order to obtain more accurate quantitative comfort indicators, the proposed comfort evaluation model predicts the comfort of gestures based on the MEE of static and dynamic gestures simultaneously.

The model not only uses the MEE but also considers the ME as an influence factor of comfort. The comfort score is set between 0 and 10; a higher score indicates better comfort. The CSs corresponding to the MEE and the ME can be calculated separately by normalization as follows:

CS_MEE = 10 × (E_max − E)/(E_max − E_min),   (12)
CS_ME = 10 × (η − η_min)/(η_max − η_min),   (13)

where CS_MEE and CS_ME represent the comfort scores for the MEE and the ME, respectively. Thus, the overall CS can be calculated after assigning different decision weights to the two comfort scores:

CS = w1·CS_MEE + w2·CS_ME,  w1 + w2 = 1,   (14)

where CS is the comfort score and w1 and w2 are the weights corresponding to CS_MEE and CS_ME, respectively.
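A minimal sketch of the normalization and weighting step, assuming a min-max normalization over the observed MEE and ME ranges (lower MEE and higher ME taken as more comfortable; the exact normalization used in the paper may differ):

```python
def comfort_score(value, v_min, v_max, lower_is_better=True):
    """Min-max normalize a MEE or ME value onto the 0-10 comfort scale.
    For the MEE a lower value means more comfort; for the ME a higher value does."""
    ratio = (value - v_min) / (v_max - v_min)
    if lower_is_better:
        ratio = 1.0 - ratio
    return 10.0 * ratio

def combined_score(cs_mee, cs_me, w_mee=0.5, w_me=0.5):
    """Weighted comfort score CS = w1*CS_MEE + w2*CS_ME with w1 + w2 = 1."""
    assert abs(w_mee + w_me - 1.0) < 1e-9
    return w_mee * cs_mee + w_me * cs_me
```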

4. Experiments

The experiments are conducted to simulate the MEE of the human upper limb for predicting the CS. The experiment process is shown in Figure 3. First of all, according to the input parameters, the movements of the upper limb musculoskeletal model are generated by the trajectory planning algorithm in joint space and Cartesian space, respectively. The musculoskeletal model is then used to simulate the muscle contractions during the movements of static and dynamic gestures. After the simulation, the biomechanical model is used to compute the MEE and the ME. Finally, the CS of the gestures is obtained to evaluate their comfort.

Figure 3: Experiment process represented by flow chart.
4.1. Joint Space Approach

In joint space, the parameters of the joint angles (i.e., θ1–θ7) are the input of the model, and the trajectories are generated by the trajectory planning algorithm to simulate human gestures. All the joint angles lie in a reasonable range and satisfy the joint limits.

As shown in Figure 4(a1), the upper limb model is adopted to represent the gestures that move from the natural sagging state to the upper part of the body. The trajectories consist of gestures with different directions and amplitudes. In joint space, the natural sagging state is set as the initial position, and the upper part of the body, a general description, indicates the end position. By changing the parameters of the joint angles, the upper limb moves from the initial position to the end position. As shown in Figure 4(a2), the upper limb model simulates the gestures that move from the left side to the right side of the body at different heights and distances. The trajectories represented by the parameters of the joint angles are given in Table 3. The joint angles are interpolated into continuous trajectories by quintic polynomials, with the angular velocities and accelerations at the initial and end positions set to zero as the boundary conditions. Then, the continuous joint angles, angular velocities, and angular accelerations are obtained by the trajectory planning algorithm. Finally, the trajectories in joint space generate the trajectories in Cartesian space via the forward kinematics.
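The quintic-polynomial interpolation with zero boundary velocities and accelerations can be sketched as follows (a generic implementation, not the authors' code):

```python
import numpy as np

def quintic_coeffs(q0, q1, T):
    """Coefficients (highest power first) of a quintic q(t) joining q0 to q1
    over [0, T] with zero velocity and acceleration at both endpoints."""
    A = np.array([
        [0,        0,        0,       0,     0, 1],  # q(0)   = q0
        [T**5,     T**4,     T**3,    T**2,  T, 1],  # q(T)   = q1
        [0,        0,        0,       0,     1, 0],  # q'(0)  = 0
        [5*T**4,   4*T**3,   3*T**2,  2*T,   1, 0],  # q'(T)  = 0
        [0,        0,        0,       2,     0, 0],  # q''(0) = 0
        [20*T**3,  12*T**2,  6*T,     2,     0, 0],  # q''(T) = 0
    ], dtype=float)
    b = np.array([q0, q1, 0.0, 0.0, 0.0, 0.0])
    return np.linalg.solve(A, b)

coeffs = quintic_coeffs(0.0, 1.0, 2.0)
profile = np.polyval(coeffs, np.linspace(0.0, 2.0, 50))  # smooth joint-angle profile
```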

Figure 4: Result of simulation experiments.
Table 3: Trajectories represented by parameters of joint angles in joint space.
4.2. Cartesian Space Approach

In Cartesian space, the upper limb model is used to simulate the gestures by Cartesian trajectory planning algorithm, and the joint angles can be calculated by the inverse kinematics. The joint angles lie in a reasonable range and satisfy the joint limits.

As shown in Figure 4(a3), the upper limb model is used to represent the human gestures of drawing circles in front of the body. The circles have different radii and different distances from the circle center to the body. The input parameters of the simulation model are (x(i), y, z, r(j)), where (x(i), y, z) is the coordinate of the circle center; x(i) and r(j) are variables, and r(j) represents the radius.

As shown in Figure 4(a4), the upper limb model is used to represent the human gestures of drawing equilateral triangles in front of the body. The triangles have different side lengths and distances. The input parameters of the simulation model are (x(i), y, z, d(j)), where (x(i), y, z) is the center coordinate of the triangle; x(i) and d(j) are variables, and d(j) represents the side length of the triangle. The input parameters of the trajectory planning algorithm in Cartesian space are given in Table 4.

Table 4: Input parameters in Cartesian space (unit: m).

For the circular trajectories, the coordinates of the trajectories in Cartesian space are calculated by using the circular function. The triangle trajectories in Cartesian space consist of linear segments and can be fitted by quintic polynomials. Finally, the Cartesian trajectories are used to create the joint trajectories by using the inverse kinematics.
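A sketch of how the circular Cartesian trajectories can be sampled, assuming the circles lie in a frontal y-z plane at distance x(i) from the body (the coordinate conventions are illustrative):

```python
import numpy as np

def circle_trajectory(center, radius, n=100):
    """Sample n points of a circular gesture in a frontal y-z plane
    at a fixed distance x from the body; center = (x, y, z)."""
    phi = np.linspace(0.0, 2.0 * np.pi, n)
    x = np.full(n, center[0])
    y = center[1] + radius * np.cos(phi)
    z = center[2] + radius * np.sin(phi)
    return np.stack([x, y, z], axis=1)

points = circle_trajectory(center=(0.3, 0.0, 1.2), radius=0.15)
```

The sampled Cartesian points would then be passed through the inverse kinematics to obtain the corresponding joint trajectories.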

5. Results and Discussion

5.1. Results

The MEE consists of the muscle mechanical energy, the kinetic energy, the gravitational potential energy, and the energy expenditure of the ligaments. The upper limb model establishes the link between the comfort of gestures and the MEE of both static and dynamic gestures. The work of gestures, the MEE, and the ME are calculated for each gesture in the experiments, and the results are shown in Figure 4.

Firstly, the gestures of each experiment form various trajectories, four hundred in total, with different parameters. The trajectories of the gestures are visualized in three-dimensional space, as shown in Figures 4(a1)–4(a4). In Figure 4(a1), the trajectories from the bottom to the top, composed of different directions and heights, are obtained by changing the corresponding joint angles. In Figure 4(a2), the trajectories from the left to the right, composed of different heights and distances, are obtained by changing the corresponding joint angles. Figure 4(a3) shows the circular trajectories that are obtained by changing the circular radius r(j) and the distance x(i) from the body to the center of the circles. Figure 4(a4) shows the triangle trajectories that are obtained by changing the side length d(j) of the triangle and the distance x(i) from the body to the center of the triangle.

Figures 4(b1)–4(b4) show the work of the gestures and reveal its distribution regularities along the different trajectories. Figures 4(c1)–4(c4) reveal the distribution regularities of the MEE along the different trajectories, and Figures 4(d1)–4(d4) those of the ME. The coordinates x and y correspond to the input parameters, and the coordinate z corresponds to the values of the work of gestures, the MEE, and the ME, shown in different colors. In Figures 4(c3), 4(d3), 4(c4), and 4(d4), the black areas represent the singular positions, where the muscle forces are too large and the joints' limits are exceeded.

The MEE and the ME contain rich biomechanical information, and thus, both of them can be used to predict the ergonomic level of the gestures. The CS is calculated based on the MEE and the ME to evaluate the comfort of gestures. As shown in Figure 5, the CS is set between 0 and 10. A higher score indicates a better comfort. The black areas are the singular positions that demonstrate an unfavorable ergonomic level.

Figure 5: Comfort score of gestures.
5.2. Comparisons

For a better understanding of the differences among the comfort evaluation models based on the ROM, the MMGA, and the MEE, the comparison results are shown in Figures 6–8. Three joint angles are chosen to demonstrate the differences.

Figure 6: Comparison of F-E joint of upper arm.
Figure 7: Comparison of AB-AD joint of upper arm.
Figure 8: Comparison of F-E joint of forearm.

The ROM of the flexion-extension joint of the upper arm is divided into three parts. According to the simulation results obtained from the model based on the ROM, the ROM from 0° to 35° is comfortable, the ROM from 35° to 90° is less comfortable, and the ROM from 90° to 150° is uncomfortable, as represented by the blue dotted lines in Figure 6. The black curve in Figure 6 represents the comfort score of gestures from 0° to 150° obtained by the proposed evaluation model. The model based on the MMGA provides a quantitative value for the ergonomic ranking, and the comfort indicator for each joint is computed through a spline fitting of the comfort ranks derived from the LUBA method along the ROM of the joints. The comparison shows both the consistency and the differences between the model based on the MEE and those based on the ROM and the MMGA: the gestures with higher CSs are located in the comfortable range, while the gestures with lower CSs are located in the uncomfortable range.

In Figure 7, the ROM of the abduction-adduction joint of the upper arm is divided into three parts from −55° to 90°. The ROM from −5° to 25° is comfortable, the ROMs from −20° to −5° and from 25° to 60° are less comfortable, and the ROM from 60° to 90° is uncomfortable. A slight difference can be found between the curve representing the comfort score and the three comfort levels obtained by the model based on the ROM.

Figure 8 shows the comparison for the flexion-extension joint of the forearm, whose ROM is divided into three parts from 0° to 140°. The ROM from 80° to 110° is comfortable, the ROMs from 0° to 80° and from 110° to 115° are less comfortable, and the ROM from 115° to 140° is uncomfortable. Even though the general trend of the CS obtained from the model based on the MEE remains consistent with the comfort levels based on the ROM and the MMGA, differences among the comfort evaluation models based on the ROM, the MMGA, and the MEE can also be found in Figure 8 (e.g., the gesture with the highest CS is not located in the comfortable range of the joint). The reasons for the differences among the ROM, MMGA, and MEE models are discussed in the following section.

5.3. Discussion

The purpose of the paper is to develop a comfort evaluation model to predict the comfort of gestures, considering both static and dynamic gestures. The inertial force, the centrifugal force, the gravity force, the muscle force, and the limit of ligaments to the joint are considered as the factors which affect the comfort of gestures. The results can provide effective support for predicting the risk of fatigue and injury for joints and muscles.

As shown in Figures 4(b1), 4(c1), and 4(d1) and Figures 4(b2), 4(c2), and 4(d2), the relationships between the work of gestures, the MEE, and the ME and the joint angles are revealed. The flexion-extension joint of the shoulder has more impact on the work of gestures and the MEE than the abduction-adduction joint, but less impact on the ME. Figures 4(b3), 4(c3), and 4(d3) and Figures 4(b4), 4(c4), and 4(d4) show the relationships between the distributions of the work of gestures, the MEE, and the ME and the distance x, the side length d, and the radius r. The distance has more impact on the work of gestures and the MEE than the side length and the radius, but less impact on the ME. The black areas represent gestures that cause joint damage or muscle fatigue, because the muscle forces are too large and the joints' limits are exceeded. The results help designers understand the biomechanical properties of the upper limb during the movements. Moreover, the results offer a quantitative and intuitive indicator to predict which act carries a higher risk of fatigue or injury for joints and muscles.

Figure 5 shows the relationships between the comfort scores and the joint angle, the distance, the side length, and the radius. The black areas represent gestures with an unfavorable ergonomic level. In other words, the gestures in the black areas should be avoided during gesture interaction, because the joint has already reached its limit positions, which consequently causes joint damage and muscle fatigue. The results provide a quantitative indicator for the comfort of gestures and predict the risk of joint damage and muscle fatigue.

The comparisons among the comfort evaluation models based on the ROM, the MMGA, and the MEE are shown in Figures 6–8. The comfort evaluation model based on the ROM addresses only the comfort evaluation of static gestures. The comfort evaluation model based on the MMGA intends to predict the comfort of dynamic gestures through a spline fitting of the comfort ranks for static gestures, but the relative kinematic characteristics during the movements of dynamic gestures, such as the arm's gravity and inertia, the centrifugal force, the muscle force, and the limit of ligaments to the joint, are not taken into consideration. The proposed comfort evaluation model can account for both static and dynamic gestures from the perspective of the MEE. The consistency between the flexion-extension simulation result of the comfort score and those based on the ROM and the MMGA for the upper arm is shown in Figure 6. However, there are differences between the abduction-adduction (or flexion-extension) simulation results of the CS and those based on the ROM and the MMGA for the upper arm (or forearm) (Figures 7 and 8).

Differences can be found when comparing the proposed comfort evaluation model based on the MEE with the models based on the ROM and the MMGA. One major reason for these differences lies in the different modeling principles behind the three evaluation models. The comfort scores from the model based on the ROM consider only static gestures, whereas the proposed comfort evaluation model based on the MEE considers the influence of both static and dynamic gestures. In other words, the comfort scores obtained from the model based on the ROM represent the comfort levels of static postures, whereas the comfort scores of the model based on the MEE describe the comfort levels of dynamic actions. As for the model based on the MMGA, although it intends to consider dynamic gestures, the comfort of dynamic gestures is predicted through a spline fitting of the comfort ranks for static gestures, and the arm's gravity and inertia, the centrifugal force, the muscle force, and the limit of ligaments to the joint during the movements of dynamic gestures are not taken into consideration.

6. Conclusion

The paper proposes a musculoskeletal biomechanical model based on the mechanical energy expenditure and the mechanical efficiency to predict the comfort of both static and dynamic gestures. On the one hand, the proposed comfort evaluation model supports a quantitative measure of gesture comfort without electromyography or other measuring devices; on the other hand, from the ergonomic perspective, it provides an intuitive indicator of which act carries the higher risk of fatigue or injury for joints and muscles, so as to reduce operators’ fatigue and extend their working hours. Experiments were conducted to validate the effectiveness of the proposed model. The comparison shows both consistency and differences between the model based on the MEE and those based on the ROM and the MMGA. The differences arise because the model based on the ROM ignores dynamic gestures and the model based on the MMGA ignores the kinematic characteristics of the movements of dynamic gestures.

In the future, the authors will adopt statistical methods, such as statistical significance tests, to analyze the differences between the proposed evaluation model and those based on the ROM and the MMGA. Furthermore, the research will be applied to the optimization of gesture design while considering the feedback of perceived effects and the gesture recognition rate in HCI, so that comfortable, efficient, and human-computer-friendly interaction gestures can be achieved.

Data Availability

The data used to support the findings of this study (trajectory planning of upper limb motion) are included in the article (Tables 3 and 4). Previously reported data on the ROM of joints were used for comparison with the results of this study and are available at https://doi.org/10.1155/2015/896072; this prior study is cited at the relevant place within the text as reference [16]. Previously reported data from the comfort evaluation model based on the MMGA were also used for comparison with the results of this study and are available at https://doi.org/10.1007/978-3-642-02809-0_62; this prior study is cited at the relevant place within the text as reference [22].

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the Key Research and Development Program of the Natural Science Basic Research Plan in Shaanxi Province of China (grant number 2016TZC-G-4-3), the Natural Science Basic Research Plan in Shaanxi Province of China (grant number 2017JM5050), and the National Natural Science Foundation of China (grant number 51805437).

References

  1. A. Badju and D. Lundberg, Shopping Using Gesture-Driven Interaction, Lund University, Lund, Sweden, 2015.
  2. N. Pretto and F. Poiesi, “Towards gesture-based multi-user interactions in collaborative virtual environments,” International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. 42, pp. 203–208, 2017.
  3. N. X. Tran, H. Phan, V. Dinh et al., “Wireless data glove for gesture-based robotic control,” in Human-Computer Interaction: Novel Interaction Methods and Techniques, vol. 5611, pp. 271–280, Springer Nature Switzerland AG, Basel, Switzerland, 2009.
  4. J. Liu, Y. Luo, and Z. Ju, “An interactive astronaut-robot system with gesture control,” Computational Intelligence and Neuroscience, vol. 2016, Article ID 6040232, 8 pages, 2016.
  5. H. Dong, A. Danesh, N. Figueroa et al., “An elicitation study on gesture preferences and memorability toward a practical hand-gesture vocabulary for smart televisions,” IEEE Access, vol. 3, pp. 543–555, 2015.
  6. T. Baudel and M. Beaudouin-Lafon, “Charade: remote control of objects using free-hand gestures,” Communications of the ACM, vol. 36, no. 7, pp. 28–35, 1993.
  7. C. Mignot and C. Valot, “An experimental study of future ‘natural’ multimodal human-computer interaction,” in Proceedings of the INTERACT ’93 and CHI ’93 Conference Companion on Human Factors in Computing Systems, pp. 67–68, Amsterdam, Netherlands, April 1993.
  8. A. Naddeo, N. Cappetti, and O. Ippolito, “Dashboard reachability and usability tests: a cheap and effective method for drivers’ comfort rating,” in Proceedings of the SAE 2014 World Congress and Exhibition, Detroit, MI, USA, April 2014.
  9. D. Kee and W. Karwowski, “The boundaries for joint angles of isocomfort for sitting and standing males based on perceived comfort of static joint postures,” Ergonomics, vol. 44, no. 6, pp. 614–648, 2001.
  10. L. McAtamney and E. N. Corlett, “RULA: a survey method for the investigation of work-related upper limb disorders,” Applied Ergonomics, vol. 24, no. 2, pp. 91–99, 1993.
  11. G. Abdollah, S. Ahmad, A. Roghayeh et al., “Ergonomic assessment of musculoskeletal disorders risk by rapid upper limb assessment (RULA) technique in a porcelain manufacturing factory,” Journal of Research and Health, vol. 4, no. 1, pp. 608–612, 2014.
  12. E. Escalona, M. Hernández, E. Yanes et al., “Ergonomic evaluation in a values transportation company in Venezuela,” Work, vol. 41, no. 1, pp. 710–713, 2012.
  13. D. Kee and W. Karwowski, “LUBA: an assessment technique for postural loading on the upper body based on joint motion discomfort and maximum holding time,” Applied Ergonomics, vol. 32, no. 4, pp. 357–366, 2001.
  14. S. Hignett and L. McAtamney, “Rapid entire body assessment (REBA),” Applied Ergonomics, vol. 31, no. 2, pp. 201–205, 2000.
  15. A. Sanchez-Lite, M. Garcia, R. Domingo et al., “Novel ergonomic postural assessment method (NERPA) using product-process computer aided engineering for ergonomic workplace design,” PLOS ONE, vol. 8, no. 8, Article ID e72703, 2013.
  16. L. Deng, G. Wang, and B. Chen, “Operating comfort prediction model of human-machine interface layout for cabin based on GEP,” Computational Intelligence and Neuroscience, vol. 2015, Article ID 896072, 13 pages, 2015.
  17. A. Apostolico, N. Cappetti, C. D’Oria, A. Naddeo, and M. Sestri, “Postural comfort evaluation: experimental identification of range of rest posture for human articular joints,” International Journal on Interactive Design and Manufacturing, vol. 8, no. 2, pp. 109–120, 2014.
  18. J. Looker and T. Garvey, “Reaching for holograms: assessing the ergonomics of the Microsoft™ HoloLens™ 3D gesture known as the air tap,” Eeum Design Connects, vol. 10, pp. 504–511, 2015.
  19. F. Yang, Q. Zhou, A. Yang, H. Hu, X. Zhang, and Z. Liu, “Based on upper extremity comfort ROM of ergonomic methods for household products design,” in Digital Human Modeling: Applications in Health, Safety, Ergonomics and Risk Management, 5th International Conference, vol. 8529, pp. 167–173, Springer Nature Switzerland AG, Basel, Switzerland, 2014.
  20. M. Annarumma, M. Pappalardo, and A. Naddeo, “Methodology development of human task simulation as PLM solution related to OCRA ergonomic analysis,” in Proceedings of the 2nd IFIP Topical Session on Computer-Aided Innovation Held at the 20th World Computer Congress, pp. 19–29, Milan, Italy, September 2008.
  21. T. K. Chen, An Investigation into Alternative Human-Computer Interaction in Relation to Ergonomics for Gesture Interface Design, De Montfort University, Leicester, UK, 2009.
  22. G. Andreoni, M. Mazzola, O. Ciani et al., “Method for movement and gesture assessment (MMGA) in ergonomics,” in Proceedings of the International Conference on Digital Human Modeling Held at HCI International, pp. 591–598, San Diego, CA, USA, July 2009.
  23. O. Weede, F. Mohrle, H. Worn et al., “Movement analysis for surgical skill assessment and measurement of ergonomic conditions,” in Proceedings of the International Conference on Artificial Intelligence, Modelling and Simulation, pp. 97–102, Madrid, Spain, November 2014.
  24. G. Andreoni, M. Mazzola, O. Ciani et al., “Quantitative body movement and gesture assessment in ergonomics,” International Journal of Human Factors Modelling and Simulation, vol. 1, no. 4, pp. 390–405, 2010.
  25. A. Keyvani, H. Dan, L. Hanson et al., “Ergonomics risk assessment of a manikin’s wrist movements: a test study in manual assembly,” in Proceedings of the Second International Digital Human Modeling Symposium, pp. 1–6, Tokyo, Japan, May 2014.
  26. D. Li, G. Wang, and S. Yu, “Layout design of human-machine interaction interface of cabin based on cognitive ergonomics and GA-ACA,” Computational Intelligence and Neuroscience, vol. 2016, Article ID 1032139, 12 pages, 2016.
  27. M. Smulders, K. Berghman, M. Koenraads et al., “Comfort and pressure distribution in a human contour shaped aircraft seat (developed with 3D scans of the human body),” Work, vol. 54, no. 4, pp. 1–16, 2016.
  28. B. R. Umberger, K. G. Gerritsen, and P. E. Martin, “A model of human muscle energy expenditure,” Computer Methods in Biomechanics and Biomedical Engineering, vol. 6, no. 2, pp. 99–111, 2003.
  29. D. A. Kistemaker, J. D. Wong, and P. L. Gribble, “The central nervous system does not minimize energy cost in arm movements,” Journal of Neurophysiology, vol. 104, no. 6, pp. 2985–2994, 2010.
  30. D. Battini, X. Delorme, A. Dolgui, A. Persona, and F. Sgarbossa, “Ergonomics in assembly line balancing based on energy expenditure: a multi-objective model,” International Journal of Production Research, vol. 54, no. 3, pp. 824–845, 2016.
  31. X. Wang, J. IU, U. Hairong et al., “Research on operator fatigue improvement in assembly line based on metabolic energy expenditure,” Journal of Hunan University (Natural Sciences), vol. 37, no. 5, pp. 31–34, 2010.
  32. V. M. Zatsiorsky, Kinetics of Human Motion, Human Kinetics, Champaign, IL, USA, 2002.
  33. W. Maurel, D. Thalmann, P. Hoffmeyer et al., “A biomechanical musculoskeletal model of human upper limb for dynamic simulation,” in Proceedings of the Eurographics Computer Animation and Simulation Workshop, pp. 121–136, Poitiers, France, September 1996.
  34. T. Fukunaga, M. Miyatani, M. Tachi, M. Kouzaki, Y. Kawakami, and H. Kanehisa, “Muscle volume is a major determinant of joint torque in humans,” Acta Physiologica Scandinavica, vol. 172, no. 4, pp. 249–255, 2001.
  35. D. F. Haeufle, M. Günther, A. Bayer, and S. Schmitt, “Hill-type muscle model with serial damping and eccentric force-velocity relation,” Journal of Biomechanics, vol. 47, no. 6, pp. 1531–1536, 2014.
  36. K. Jovanovic, J. Vranic, and N. Miljkovic, “Hill’s and Huxley’s muscle models: tools for simulations in biomechanics,” Serbian Journal of Electrical Engineering, vol. 12, no. 1, pp. 53–67, 2015.
  37. J. F. Knight and C. Baber, “Effect of head-mounted displays on posture,” Human Factors, vol. 49, no. 5, pp. 797–807, 2007.
  38. S. Yazdanirad, A. H. Khoshakhlagh, E. Habibi, A. Zare, M. Zeinodini, and F. Dehghani, “Comparing the effectiveness of three ergonomic risk assessment methods, RULA, LUBA, and NERPA, to predict the upper extremity musculoskeletal disorders,” Indian Journal of Occupational and Environmental Medicine, vol. 22, no. 1, pp. 17–21, 2018.
  39. P. Bickford, V. S. M. Sreedhara, G. M. Mocko, A. Vahidi, and R. E. Hutchison, “Modeling the expenditure and recovery of anaerobic work capacity in cycling,” in Proceedings of the 12th Conference of the International Sports Engineering Association, Brisbane, Queensland, Australia, March 2018.