Artificial Intelligence and Edge Computing in Mobile Information Systems
Face Reconstruction Based on Multiscale Feature Fusion and 3D Animation Design
The development of the Internet of Things and 3D technology has promoted the wide application of face models in 3D animation. However, when an expression is inconsistent with the underlying facial muscle movement, the reconstructed face may be far from its real appearance. This paper therefore proposes a character expression simulation model under the 3DS Max framework. A facial muscle motion model was established according to the relationship between the head bones and muscles, and the expression simulation of the original 3D animated character "yaya" was then designed within 3DS Max. Facial expression tests on "yaya" showed that the face simulation model built with this method not only produces vivid and natural expressions but also conforms to the laws of facial muscle movement, which provides a useful reference for the construction and application of 3D face models.
1. Introduction
With the rapid development of Internet of Things technology, face recognition and reconstruction have attracted much attention [1–3]. 3D face reconstruction has become a research hotspot because of its wide application in computer vision, face recognition, facial animation, computer games, and other fields [4, 5]. As a specific object of study, the human face has seen great progress in 3D model construction in recent years [6–8]. Meanwhile, with the development of computer technology, virtual characters appear frequently in 3D animation design and are widely used in all kinds of animation [9, 10]. In these works, the virtual character's expression is the key to arousing the audience's emotional resonance, and it remains an urgent problem in 3D animation design. Large foreign animation companies and studios usually use facial motion capture to simulate facial expressions: sensors placed on an actor's face drive a predefined computer model so that the virtual character can produce subtle expressions, and animators then manually adjust keyframes and refine details according to the captured expression data. As a result, virtual characters produced in Europe, America, and elsewhere are vivid and appealing, with distinctive personalities in both body and spirit. Domestically, because of limited capital and short production cycles, expensive facial expression capture equipment is often unavailable to help animators create virtual character expressions, and problems such as stiff mouth corners, dialogue that does not match lip movement, and dull facial expressions are common [12, 13]. Therefore, how to create more vivid virtual character expressions under these constraints has become a focus of 3D animation design research.
Face reconstruction is very important for biometric recognition, because the estimation of pose, expression, and illumination can all be improved by an accurate individual-specific model. Researchers have proposed increasingly accurate 3D face algorithms, the most successful of which are based on the 3D morphable model (3DMM). Because the face is nonrigid and deforms with aging, expression changes, and muscle movement, face reconstruction has attracted extensive attention in recent years [15–18]. Depending on the input type and the required level of detail, face reconstruction often needs to consider many different factors. Table 1 lists the most common face reconstruction methods. Depending on the research purpose and background, there are many scenes and methods for face reconstruction; in practical applications, multiple methods are often combined to overcome the shortcomings of any single one.
Previous studies have mainly used linear shape and texture fitting algorithms. Some scholars reconstruct 3D face models from a single frontal photo, requiring only a neutral expression and normal lighting [19, 20]. Most existing studies use the external features of the 3D face to build a more realistic virtual face model under different conditions, but these methods have difficulty reflecting the actual expression features of a face in motion.
2. State of the Art
At present, large foreign animation companies use expensive expression capture devices for virtual character expression simulation. The method has been used in many major films; for example, in the animated film "Final Fantasy," all the digital human characters were created through motion capture so that the digital actors could approach the effect of real performers, which can be called a landmark work. Then, in 2010, "Avatar" brought facial expression capture into motion capture technology, making the creation of virtual characters vivid and flexible: the expressive techniques are rich and delicate, and the characters have strong personalities, presenting lifelike virtual characters to the audience.
Compared with the mature 3D animation production technology and capital investment abroad, domestic investment in animation is still insufficient, so domestic animation remains at the level of traditional 2D animation or low-cost 3D animation with little technical content. This means that domestic animation rarely handles character expressions directly, instead compensating through camera work and sound effects. When a character's expression must be shown, domestic animation companies usually use three-dimensional morphing: the shape is interpolated between two 3D objects to obtain the 3D model's expression. The resulting expressions are simple and lack a vivid, natural feel, and because they are restricted to preset key expressions, specific expressions are difficult to handle. With the development of computer technology and 3D animation technology, 3D animation in China has also improved. In 2006, "Magic Bus Ring" became the first original 3D animated film in China, and also the first domestic 3D animation produced at the 100-million scale. A large amount of facial expression capture technology was used in this film to depict the expressions of the virtual characters, but relevant experience and technology in producing character expressions were lacking; the resulting characters were not natural and vivid, and the relevant technology needs further study.
3.1. Appearance of a Character
Simulating virtual character expressions in 3D animation requires mastering the related knowledge of anatomy. Just as traditional painting requires understanding the internal structure of a subject, one must study the relationship between the facial muscles and facial expressions in order to design the expression details of virtual characters. The construction of a character expression simulation model is usually based on anatomy. Therefore, when building a face model, we need to understand not only the bone structure of the head but also the composition of the muscles and the corresponding expression changes.
The first step is to understand the shape structure of the head, shown in Figure 1. The recognized shape of the head is based on the structure and shape of the skull, and it varies greatly with age and gender; for example, men's skulls are generally larger than women's, with more pronounced features. The head can be divided into the neurocranium (cerebral cranium) and the facial cranium. The cerebral cranium is roughly spherical, with the facial cranium below it, and the two parts together constitute the basic shape of the head.
Figure 2 is a schematic diagram of the head muscle structure. As can be seen from Figure 2, most of the head muscles are thin facial muscles, which determine facial expressions and, together with the skull, form the complete shape of the head.
According to their function, the head muscles can be divided into two types: muscles responsible for facial expression and muscles responsible for chewing. The main expression muscles are the frontalis, orbicularis oculi, orbicularis oris, depressor anguli oris, depressor labii inferioris, and zygomaticus, which together create all the expressions and movements of the human face. The muscles responsible for chewing include the temporalis and the masseter, which relate directly to chewing movement. It can be seen that facial expressions are not the result of a single muscle's movement; every expression is produced by the facial muscles acting together. Constrained by the head, the movement of the facial muscles is smaller than that of other parts of the body, and because of their special distribution, facial muscle movements almost never follow a straight line. Therefore, when constructing a virtual character expression simulation model, we should design according to the distribution and movement of the facial muscles and focus on the movement of the muscles around a joint when that joint moves, so as to design realistic facial expression movement.
3.2. Design of Role Expression Simulation System Based on 3DS Max Technology
3DS Max offers two main methods for designing character expression simulation. The first is morph-target (fusion deformation) animation, a special form of expression animation in which a target object is deformed into another object of a different shape in 3D space. In expression simulation design, each target object is set to the facial state produced by an individual facial muscle, and multiple targets are then combined on the mesh to obtain expressions driven by muscles moving together. This method has the advantages of simple manipulation and intuitive results, and targets can be stored and recalled at any time; its shortcoming is that the character's expression can only follow the preset deformation targets, so the detail of the performance is not ideal. The second method is to simulate facial expressions with bones bound to the face, directly using controllers to drive the mesh deformation. With accurate setup, detail control is good and the mesh can be deformed simultaneously; its shortcoming is that manipulation is less convenient, as individual parts must be adjusted to obtain an expression. In this paper, according to the requirements of the virtual character in the 3D animation design, the second method in 3DS Max is chosen for the expression simulation design. The 3D design of facial expression simulation based on 3DS Max requires creating a skeleton, which can be divided into two parts. One part is the control skeleton, which controls the head, neck, lower jaw, and the skeleton of the body as a whole. The other part is the facial expression skeleton, which actually drives the deformation of the face. The skeleton of the rest of the virtual character still uses the Biped bone system.
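The first (morph-target) method described above can be illustrated with a minimal sketch: each deformation target contributes its offset from the base mesh, scaled by a weight, and multiple targets compound on the same grid. The function name and toy vertex data here are illustrative, not part of the paper's model.

```python
import numpy as np

def blend_shapes(base, targets, weights):
    """Morph-target (blend-shape) deformation: each target mesh
    contributes its offset from the base mesh, scaled by a weight,
    and all offsets compound on the same grid of vertices."""
    base = np.asarray(base, dtype=float)
    result = base.copy()
    for target, w in zip(targets, weights):
        result += w * (np.asarray(target, dtype=float) - base)
    return result

# A toy 2-vertex "face", blended 50% toward a "smile" target.
base = [[0.0, 0.0], [1.0, 0.0]]
smile = [[0.0, 0.2], [1.0, 0.2]]
blended = blend_shapes(base, [smile], [0.5])
```

With a weight of 0.5, both vertices move halfway toward the target, giving approximately [[0.0, 0.1], [1.0, 0.1]]; this also shows the method's limitation noted above, since only shapes spanned by the preset targets are reachable.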
Finally, the two parts of the skeleton are connected to realize the animation of the character. Next, the muscle model is constructed in 3DS Max. In general, based on analysis of facial muscle movement, the muscles can be divided into three types: linear muscles that stretch, sphincter muscles that contract, and plane (sheet) muscles. A vector model is then established according to the movement direction of each muscle type, and the underlying skeleton structures are independent of each other. In particular, in this model, each muscle vector has its own influence field, and its influence is controlled by a radial distance function around the muscle attachment point. Figure 3 is a diagram of the linear muscle, where the fixed endpoint of the muscle vector is V1 and the other end is V2; these two variables are obtained directly from the normal vector calculation. P denotes any point in the muscle's influence field, P′ the position after P moves, and D the distance between the point P and the fixed endpoint. Ω represents the maximum angle of the influence field, and α represents the angle between the vector from V1 to P and the muscle's main line of action. When the muscle contracts, the displacement of P to P′ is as shown in equation (1), where
K is a constant, and the radial attenuation factor is defined in equation (2).
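A simplified numerical sketch of this linear-muscle displacement follows, in the style of Waters-type muscle models: a point P inside the influence sector is pulled toward the fixed endpoint V1, attenuated by the angle α to the line of action and the distance D. The exact falloff factors here are assumptions of this sketch, not the paper's equations.

```python
import math

def linear_muscle_displace(p, v1, v2, k, omega, r_max):
    """Pull point P toward the fixed endpoint V1 of a linear muscle
    running from V1 to V2. K scales the contraction, omega is the
    maximum influence angle, and r_max the influence radius; the
    cos(alpha) and (1 - D/r_max) falloffs are illustrative."""
    px, py = p
    dx, dy = px - v1[0], py - v1[1]
    mx, my = v2[0] - v1[0], v2[1] - v1[1]
    d = math.hypot(dx, dy)                      # D: distance from P to V1
    if d == 0 or d > r_max:
        return p                                # outside the influence radius
    # alpha: angle between V1->P and the line of action V1->V2
    cos_a = (dx * mx + dy * my) / (d * math.hypot(mx, my))
    alpha = math.acos(max(-1.0, min(1.0, cos_a)))
    if alpha > omega:
        return p                                # outside the influence sector
    scale = k * math.cos(alpha) * (1.0 - d / r_max)
    return (px - scale * dx, py - scale * dy)   # displaced position P'
```

For example, a point halfway along the muscle axis, with k = 0.5 and r_max = 1, moves from (0.5, 0.0) to (0.375, 0.0) under this falloff, while points outside the sector stay fixed.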
Figure 4 is a diagram of the sphincter muscle. The displacement of a point on the sphincter is calculated by the following equation, where C denotes the center of gravity and k represents the elastic parameter. The definition of the scaling function f is shown in equations (3) and (4), respectively, where ε is a threshold: when the weighted distance d is less than ε, f is zero. The weighted distance d is defined as shown in equation (5), where y and x, respectively, represent the ordinate and abscissa of P, while a and b represent the semi-axis lengths of the ellipse in Figure 4.
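To make the sphincter formulation concrete, here is a minimal sketch in which a point is drawn toward the center of gravity C with elastic parameter k, using the weighted elliptical distance d with semi-axes a and b. The threshold behavior (no movement when d < ε) follows the text; the exact falloff form is an assumption of this sketch.

```python
import math

def sphincter_displace(p, center, a, b, k, eps):
    """Contract a point P toward the sphincter's center of gravity C.
    d is the weighted distance using the ellipse semi-axes a and b;
    points with d below the threshold eps, or outside the ellipse,
    do not move. The (1 - d) falloff is illustrative."""
    x, y = p[0] - center[0], p[1] - center[1]
    d = math.hypot(x / a, y / b)          # weighted distance to C
    if d < eps or d > 1.0:
        return p                          # inside threshold or outside ellipse
    scale = k * (1.0 - d)                 # contract toward the center
    return (p[0] - scale * x, p[1] - scale * y)
```

For example, with a unit circle (a = b = 1), k = 0.5, and ε = 0.1, the point (0.5, 0.0) has d = 0.5 and moves to (0.375, 0.0), while points near the center stay put, mimicking the pursing motion of the orbicularis oris.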
Figure 5 is a schematic view of the plane muscle, defined by its width and length. P is a point in the influence field and P′ its displaced position; the muscle's line of action has two endpoints marking its maximum extent, together with the endpoints of the action region and of the centerline, and d is the distance between the point and the action line. The displacement is calculated as shown in equation (6), where the parameter depends on the specific circumstances. The expression of the falloff function is shown in equations (7) and (8):
The exponent in equations (7) and (8) is a freely chosen parameter.
The creation of a 3D animated virtual character and the simulation of its expressions are realized entirely with computer 3D graphics technology. A clear plan and a correct production pipeline help the task proceed smoothly and avoid process errors, confused sequencing, wasted time and labor, and high cost. Before the simulation design of the 3D animated character's expressions, the production principles must therefore be designed so that the process can be formulated according to the actual technology and needs. Figure 6 is the schematic diagram of the pipeline used in this article. As the diagram shows, the production principle is driven by the needs of the 3D animated character's expressions: a polygon model is created with carefully planned edge routing, and map coordinates are assigned at the same time; ZBrush is then used to sculpt the normal-map details, and Photoshop is used to paint the materials; next, the facial expression system and the corresponding body system are set up to drive the face, the skinning is configured, and the animation is output. Using the face model together with the relevant tools provided by 3DS Max, the simulation of facial expressions in motion can be realized effectively.
4. Result Analysis and Discussion
This paper mainly studies the application of character expression simulation under the 3DS Max framework in 3D animation design, so the application test in the experiment creates the expression simulation design of an original 3D animated character, "yaya," based on 3DS Max. The specific process of 3D animated character creation is shown in Figure 7. By initializing the relevant external face feature information and using the face simulation model established in this paper, realistic simulation of faces in different scenes can be achieved on the 3D platform.
Firstly, according to the production principles above and the requirements of character expression simulation, the 3D animated character production process shown in Figure 7 was developed, and the facial muscle and human skeleton systems were built according to this process.
Secondly, a medium-precision model was established under the 3DS Max framework, and ZBrush was used to subdivide the model and bake the corresponding normal map, so that the medium model can reflect the detail of the high-precision model and enhance the visual realism of the character.
At the same time, optimizing the scene in the 3D animation facilitates driving the character model and rendering the final model, and saves considerable system resources. The creation process for the rough head form is shown in Figure 8. In blocking out the head, the main reference is the human anatomy: the basic shape of the head and skull is adjusted from a box, and the basic form is then completed from the whole to the parts through polygon extrusion and edge cutting.
In 3DS Max, the model surface is decomposed into polygons, so the edge loops must be set and adjusted strictly according to the muscle structure, in order to plan the number of subdivision surfaces effectively and obtain a better normal-map result. After adjustment, the wiring is more reasonable, with appropriate spacing, and meets the needs of 3D virtual character expression simulation. Figure 9 shows the wiring distribution of the final head form.
Next, the character's facial bones must be constructed to ensure that individual bones and joints can move separately while also producing a linkage effect on the surrounding joints, so that facial muscle movement can be simulated realistically. The bones of the 3D animated character "yaya" then need to be bound to the skin of the face model to drive the movement of the face mesh. Because the facial expression skeleton within the head bones is mainly responsible for controlling the movement of the face mesh, the skin binding must select bones from the facial expression skeleton. Skinning must be adjusted continuously to obtain satisfactory results, proceeding from the whole to the local parts. Figure 10 shows part of the face skinning adjustment process.
In refining the skeleton envelopes, this paper mainly uses weight painting (the brush-weight method) to increase the rationality and natural feel of the skeleton-driven mesh in motion. Careful, even exaggerated, weight adjustment is needed where facial expressions and joint movement reach their limits, so as to ensure correct mesh deformation and avoid abnormal joint deformation during small-range movement.
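The skinning step above amounts to linear blend skinning: each vertex moves by a weighted sum of bone transforms, and the per-vertex weight table is exactly what the brush-weight adjustment edits. A minimal sketch (function and variable names are illustrative, not 3DS Max's internal implementation):

```python
import numpy as np

def skin_vertices(rest_vertices, weights, bone_transforms):
    """Linear blend skinning: each rest-pose vertex is transformed by
    every bone's 4x4 matrix and the results are mixed by the per-vertex,
    per-bone weights (each vertex's weights should sum to 1)."""
    rest = np.asarray(rest_vertices, dtype=float)          # (n, 3)
    w = np.asarray(weights, dtype=float)                   # (n, n_bones)
    homo = np.hstack([rest, np.ones((len(rest), 1))])      # homogeneous coords
    out = np.zeros_like(rest)
    for b, T in enumerate(bone_transforms):
        transformed = homo @ np.asarray(T, dtype=float).T  # apply bone b
        out += w[:, b:b + 1] * transformed[:, :3]
    return out
```

For instance, a vertex weighted 0.5/0.5 between a stationary bone and a bone translated one unit along y moves halfway, to (1.0, 0.5, 0.0); painting more weight onto the moving bone pulls the vertex further, which is why uneven weights near joints cause the abnormal deformations mentioned above.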
Finally, the facial expressions of the 3D animated character "yaya" were simulated and tested. Figure 11 shows the expression results of the "yaya" 3D animation. According to the simulation results, even when adjusted to the limit of facial expression, the expression simulation of "yaya" performs very well, and the facial expression system based on the 3DS Max framework meets the requirements. Although the facial movement is relatively large around the eyes, mouth, and chin, the expressions of these parts are more natural, the deformation of the facial muscles is more realistic, and the character's expressions are more vivid than in traditional 3D animation.
Compared with 3DMM, which models the face space with a single Gaussian component, and with methods that use multiple mixture components, the method proposed in this paper not only effectively improves the accuracy of face feature recognition but also achieves good recognition results across all poses in motion. Accuracy tests using the face model constructed in this paper show that the reconstructed 3D face shape is closer to the real 3D shape than the results of existing methods. The face reconstruction method proposed here changes the 3D face shape in space and optimizes the 3D shape, its pose, and its expression per frame based on 3DS Max. Compared with existing methods, it provides higher simulation accuracy, and when implemented on a GPU its processing time is much shorter than that of recent related modeling methods [24–27].
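For context, the 3DMM baseline discussed here is a linear statistical model: a face shape is the mean shape plus a weighted sum of principal shape components, S = S̄ + Σᵢ αᵢ sᵢ. A toy sketch (the shapes and coefficients below are illustrative, not learned from data):

```python
import numpy as np

def reconstruct_shape(mean_shape, basis, coeffs):
    """3DMM-style linear shape model: add each principal shape
    component, scaled by its coefficient alpha_i, to the mean shape."""
    S = np.asarray(mean_shape, dtype=float).copy()
    for alpha, s in zip(coeffs, basis):
        S += alpha * np.asarray(s, dtype=float)
    return S
```

Fitting a 3DMM to an image means searching for the coefficients (plus pose and illumination) that best explain the observation; the single-Gaussian prior over these coefficients is the limitation the comparison above refers to.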
5. Conclusion
With the development of 3D animation technology and the Internet of Things, 3D animated characters will appear not only in animation itself but also in film production, advertising, and other fields. However, compared with developed countries, China invests less in 3D animation. Limited by this objective environment, the development of 3D animated character expression in China lags behind: expressions and dialogue are often seriously inconsistent, and the animation does not look natural and real, failing to arouse the audience's resonance. How to improve the simulation of 3D animated character expressions under these adverse conditions has therefore received attention in various fields. This paper proposes the construction of 3D animated character expressions under the 3DS Max framework: a muscle motion model and its associated system are built according to the requirements and the rules of skeletal muscle movement. Finally, expression tests on the original 3D animated character "yaya" show that expression simulation under the 3DS Max framework can improve the authenticity and naturalness of 3D animated characters, makes the facial movement more consistent with the rules of muscle movement and the character more vivid, and avoids abnormal deformation of the facial skeleton during motion.
The reconstructed face models proposed in this paper yield some preliminary results. Since different face effects are usually determined by the 3D face reconstruction model, this study has reference value for further in-depth research on 3D face reconstruction. As a preliminary exploration, it inevitably has limitations, which point out directions for future work. To reflect facial expressions truly, different feature fusion methods are usually used to handle different facial expressions and other morphological features [28–30]. Therefore, how to effectively fuse the local features of the human face will be the main problem to be solved in the future.
Data Availability
The labeled dataset used to support the findings of this study is available from the corresponding author upon request.
Conflicts of Interest
The authors declare that there are no conflicts of interest.
Acknowledgments
The authors would like to acknowledge the Project of Shaanxi Provincial Department of Education "Research on the Innovative Application of Shaanxi Fengxiang Clay Sculpture Symbol Elements in Contemporary Animation Character Design under the Strategy of Cultural Revitalization" (no. 19JK0895).
References
Z. Bhati, A. Waqas, M. Karbasi, and A. W. Mahesar, "A wire parameter and reaction manager based biped character setup and rigging technique in 3Ds Max for animation," International Journal of Computational Geometry and Applications, vol. 5, no. 2, pp. 21–36, 2015.
J. Grant, K. Güçlü, Ç. Ünal, and E. Yakupoğlu, "An example for 3D animated character design process: the lost city Antioch," Procedia - Social and Behavioral Sciences, vol. 122, no. 2, pp. 65–71, 2014.
B. Popović and M. Dimitrijević, "3D character modeling and animation at the Faculty of Technical Sciences in Novi Sad," Information and Management, vol. 40, pp. 45–50, 2011.
R. Newcombe, D. Fox, and S. Seitz, "DynamicFusion: reconstruction and tracking of non-rigid scenes in real-time," in Proceedings of the Conference on Computer Vision and Pattern Recognition, pp. 343–352, Boston, MA, USA, June 2015.
K. H. Su and K. S. Rae, "Studies on color analysis of character gender in kids TV animation - mainly with analysis on 3D animation program for kids from EBS channel," Journal of Digital Design, vol. 14, pp. 59–68, 2014.
O. Villar, Learning Blender: A Hands-On Guide to Creating 3D Animated Characters, Pearson Schweiz AG, Zug, Switzerland, 2nd edition, 2017.